Sensor arrays are used, for example, in video cameras, and generally include a two-dimensional array of pixels that is fabricated on a substrate. Each pixel includes a sensing element (e.g., a photodiode) that is capable of converting a portion of an optical (or other radiant source) image into an electronic (e.g., voltage) signal, and access circuitry that selectively couples the sensing element to control circuits disposed on a periphery of the pixel array by way of address and signal lines. The access circuitry typically includes metal address and signal lines that are supported in insulation material deposited over the upper surface of a semiconductor substrate, and positioned along the peripheral edges of the pixels to allow light to pass between the metal lines to the sensing elements through the insulation material. Most image sensors contain a large number (e.g., millions) of pixels which transform photons coming from a photographed scene or other optical image source into a large number of corresponding voltage signals, which are stored on a memory device and then read from the memory device and used to regenerate the optical image on, for example, a liquid crystal display (LCD) device.
One of the most important figures of merit for a camera sensor is its dynamic range (DR), which is defined as the largest signal (in the non-saturated region) generated in the sensor corresponding to the bright areas of a scene, divided by the smallest signal which can be correctly detected (above the sensor noise level) in the dark areas of the scene. Correctly capturing (i.e., “photographing”) the dynamic range in a scene is a problem known since the early days of photography, when photographers would “underexpose” photographic film in order to capture high light (bright) details of a scene, and “overexpose” film in order to observe lowlight (dark) details in the scene. Although CMOS image sensors have improved significantly in the last decade in their ability to observe details in the dark (lowlight) areas of a scene (mainly by reducing the electronic readout noise, for example, with the use of pinned diode-type photodiodes with CDS), the dynamic range of CMOS image sensors still remains well below that of the human eye in its ability to capture all details in an uncontrolled lighting environment.
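The ratio defined above is often expressed in decibels. A minimal sketch of that calculation, assuming the largest non-saturated signal is set by the photodiode full-well capacity and the smallest detectable signal by the read-noise floor (the specific electron counts below are hypothetical illustration values, not taken from any particular sensor):

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range expressed in dB: the largest non-saturated signal
    (approximated by full-well capacity, in electrons) divided by the
    smallest correctly detectable signal (approximated by the read-noise
    floor, in electrons)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Hypothetical example: 10,000 e- full well, 2 e- read noise
# gives a signal ratio of 5,000:1, i.e., about 74 dB.
print(round(dynamic_range_db(10_000, 2.0), 1))
```

Under these assumed numbers the result (~74 dB) falls well short of the roughly 100+ dB often attributed to the human eye across an uncontrolled scene, which illustrates the gap described above.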
A straightforward approach to increasing the dynamic range of a CMOS sensor is to increase the full-well capacity of the sensor's photodiodes. This approach usually improves the quality of data captured in the brighter areas of a scene (i.e., relatively brighter areas can be captured correctly without saturating the pixel, in comparison to a sensor having photodiodes with smaller full-well capacities). However, increasing the full-well capacity of the sensor's photodiodes typically degrades the sensitivity of the sensor, since more photoelectrons are generally needed to overcome the intrinsic noise of the pixels and the sensor reading circuit.
What is needed is a high dynamic range CMOS image sensor in which each pixel is able to either effectively “underexpose” the pixel's photodiode when the pixel is exposed to high light (bright) details of a scene, or effectively “overexpose” the pixel's photodiode when the pixel is exposed to lowlight (dark) details of the scene, and is able to achieve this function without significantly increasing either pixel cell size or control circuit complexity.