CMOS imaging sensors are widely used in cameras and other imaging applications. The imaging sensors typically include a two-dimensional array of pixel sensors. Each pixel sensor includes a photodiode that measures the image intensity at a corresponding point, referred to as a pixel, in the image. The dynamic range of the image sensor is the ratio of the maximum amount of light that can be measured to the minimum amount. An image is formed by first emptying the photodiodes of any accumulated charge and then exposing the photodiodes to the image. Each photodiode accumulates charge at a rate determined by the light intensity at the corresponding pixel. In general, the amount of charge that can be accumulated in a photodiode has a maximum value, referred to as the maximum well capacity. Once this capacity is reached, the excess charge is removed from the pixel through a special gate that shunts the excess charge to ground to prevent artifacts in the image. The minimum charge that can be detected is determined by noise.
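The dynamic range relationship described above can be illustrated with a short sketch. The full well capacity and noise floor values below are hypothetical illustrative numbers, not figures from any particular sensor.

```python
import math

# Illustrative sketch of sensor dynamic range (hypothetical numbers).
# Dynamic range is the ratio of the largest measurable signal (the
# maximum well capacity) to the smallest detectable signal (the
# noise floor).
full_well_capacity_e = 20_000   # max charge per photodiode, in electrons (assumed)
noise_floor_e = 5               # minimum detectable charge, in electrons (assumed)

dynamic_range = full_well_capacity_e / noise_floor_e
dynamic_range_db = 20 * math.log10(dynamic_range)

print(f"dynamic range: {dynamic_range:.0f}:1 ({dynamic_range_db:.1f} dB)")
```

With these assumed numbers the sketch reports a 4000:1 ratio, about 72 dB; increasing the well capacity or lowering the noise floor raises the dynamic range accordingly.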
In principle, the maximum well capacity can be increased by utilizing larger photodiodes; however, this solution increases the cost of the imaging array and requires processing electronics that can accommodate the larger dynamic range of the signals generated by the pixels. Another solution for increasing the dynamic range of the imaging array uses two different photodiodes for each pixel. In this solution, a large area photodiode measures the low light levels and a smaller photodiode measures the intensities at the brighter locations in the image: at a high brightness location, the smaller photodiode is used, while at a dim location, the larger photodiode is used. This solution, however, requires two sets of photodiodes and the increased silicon area associated with the additional photodiodes that measure the high brightness locations in the image.
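One way to picture the dual-photodiode scheme is as a per-pixel selection rule: use the large photodiode's reading unless it has saturated, and otherwise fall back to the small photodiode scaled by the sensitivity (area) ratio. The function name, thresholds, and scale factor below are illustrative assumptions, not part of any specific sensor design.

```python
def combine_dual_photodiode(large_e, small_e,
                            large_full_well=20_000, area_ratio=8.0):
    """Pick a pixel value from a large/small photodiode pair.

    large_e, small_e: accumulated charge (electrons) on each photodiode.
    large_full_well: assumed saturation level of the large photodiode.
    area_ratio: assumed sensitivity ratio between the two photodiodes.
    All names and values here are hypothetical illustrations.
    """
    if large_e < large_full_well:
        return float(large_e)        # dim pixel: large photodiode reading is valid
    return small_e * area_ratio      # bright pixel: scale up the small photodiode

# Dim pixel: large photodiode below saturation, used directly.
dim = combine_dual_photodiode(1_500, 180)
# Bright pixel: large photodiode saturated, small photodiode rescaled.
bright = combine_dual_photodiode(20_000, 4_000)
```

The scaling step is what extends the dynamic range: the small photodiode saturates much later, and multiplying by the area ratio maps its reading back onto the large photodiode's intensity scale.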
A second solution uses multiple exposures to provide the increased dynamic range. In this solution, two pictures are taken of each scene. The first picture uses a very short exposure time, which captures the intensities of the high brightness points in the image; pixels at low intensity points in the image are underexposed. The second picture uses a much longer exposure period. In the second picture, the pixels at the high intensity points are overexposed, while the pixels at the low intensity points are now adequately exposed and provide the intensity values at those points. The two pictures are then combined to provide an image with increased dynamic range. This approach, however, leads to artifacts in the image, since the two pictures are separated in time; if the scene changes between the two exposures, the combined image contains motion artifacts.
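The two-exposure combination can be sketched as a simple per-pixel fusion rule: keep the long-exposure value where it is valid, and where it has clipped, substitute the short-exposure value scaled by the exposure-time ratio. This is a minimal sketch under assumed parameters (12-bit pixel values, a 16:1 exposure ratio); practical fusion algorithms blend the two readings smoothly rather than switching abruptly.

```python
def fuse_exposures(long_px, short_px,
                   exposure_ratio=16.0, saturation=4095):
    """Merge one pixel from a long and a short exposure.

    long_px, short_px: raw pixel values (assumed 12-bit ADC counts).
    exposure_ratio: long exposure time / short exposure time (assumed).
    A saturated long-exposure pixel is replaced by the short-exposure
    pixel rescaled to the long exposure's intensity units.
    """
    if long_px < saturation:
        return float(long_px)            # adequately exposed in the long frame
    return short_px * exposure_ratio     # overexposed: rescale the short frame

# Low-intensity point: the long exposure is valid and used directly.
low = fuse_exposures(800, 45)
# High-intensity point: the long exposure clipped, so the short
# exposure is rescaled by the exposure ratio.
high = fuse_exposures(4095, 3000)
```

Because the two frames are captured at different times, this rule cannot remove the motion artifacts noted above; it only reconstructs intensity values for a static scene.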