This relates generally to imaging devices, and more particularly, to imaging devices having multi-port image pixels.
Image sensors are commonly used in electronic devices such as cellular telephones, cameras, and computers to capture images. In a typical arrangement, an electronic device is provided with an array of image pixels arranged in pixel rows and pixel columns. Each image pixel contains a photodiode for generating charge in response to image light. Circuitry is commonly coupled to each pixel column for reading out image signals from the image pixels.
In certain applications, it may be desirable to increase the dynamic range of an image sensor, which is generally limited by the highest and lowest signal levels that a photodiode in a pixel of the image sensor can generate. The saturation full well (SFW) of the photodiode generally limits the highest signal that the photodiode can generate, and noise due to dark current (DC) generally limits the lowest signal that the photodiode can generate. In some arrangements, different integration times are used for different pixels in the pixel array in an attempt to improve the dynamic range of the image sensor. In some arrangements, different photodiode geometries are used for different pixels in the pixel array in an attempt to improve the dynamic range of the image sensor.
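Dynamic range, as constrained by the saturation full well and the dark-current-limited noise floor described above, is conventionally expressed as the ratio of the largest to the smallest resolvable signal. The following sketch illustrates the conventional computation; the electron counts shown are illustrative assumptions, not values from this disclosure:

```python
import math

# Hypothetical pixel parameters (illustrative values only):
full_well_e = 10000.0   # saturation full well (SFW), in electrons
noise_floor_e = 5.0     # dark-current-limited noise floor, in electrons

# Dynamic range in decibels: 20 * log10(max signal / min signal).
dynamic_range_db = 20.0 * math.log10(full_well_e / noise_floor_e)

print(f"{dynamic_range_db:.1f} dB")  # roughly 66 dB for these values
```

Raising the SFW or lowering the dark-current noise each widens this ratio, which is why the approaches discussed (varied integration times, varied photodiode geometries) target one limit or the other.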
However, using multiple pixels to detect high and low levels of incident light generates location disparity between the high and low signals, as the same photodiode does not generate both signals (i.e., the photodiodes that generate the high and low signals are adjacent or otherwise spatially separated). Such spatial separation violates the spatial frequency uniformity and Nyquist cut-off frequency of the two photodiodes that form the basis for image reconstruction from the sampled signals. Using multiple different integration times within the same pixel for sampling is also difficult to implement, because the maximum amount of time for which charge can be generated by the pixels, and the time sequencing requirements in a given row, are tied to the frame readout time limit to prevent motion blur (i.e., the maximum integration time cannot be too long, or else motion artifacts will occur). Lowering the integration time to avoid such artifacts reduces the sensitivity of the image sensor to low light levels.
It would therefore be desirable to be able to provide imaging systems with improved dynamic range.