1. Field of the Invention
The present invention relates to active pixel sensor cells and, more particularly, to an imaging system and method for increasing the dynamic range of an array of active pixel sensor cells.
2. Description of the Related Art
Charge-coupled devices (CCDs) have been the mainstay of conventional imaging circuits for converting a pixel of light energy into an electrical signal that represents the intensity of the light energy. In general, CCDs utilize a photogate to convert the light energy into an electrical charge, and a series of electrodes to transfer the charge collected at the photogate to an output sense node.
Although CCDs have many strengths, including high sensitivity and a high fill factor, CCDs also suffer from a number of weaknesses, such as limited readout rates and a limited dynamic range. Most notable among these weaknesses, however, is the difficulty of integrating CCDs with CMOS-based microprocessors.
To overcome the limitations of CCD-based imaging circuits, more recent imaging circuits use active pixel sensor cells to convert a pixel of light energy into an electrical signal. With active pixel sensor cells, a conventional photodiode is typically combined with a number of active transistors which, in addition to forming an electrical signal, provide amplification, readout control, and reset control.
FIG. 1 shows an example of a conventional CMOS active pixel sensor cell 10. As shown in FIG. 1, cell 10 includes a photodiode 12, a reset transistor 14, whose source is connected to photodiode 12, a buffer transistor 16, whose gate is connected to photodiode 12, and a select transistor 18, whose drain is connected in series to the source of buffer transistor 16.
Operation of active pixel sensor cell 10 is performed in three steps: a reset step, where cell 10 is reset from the previous integration cycle; an image integration step, where the light energy is collected and converted into an electrical signal; and a signal readout step, where the signal is read out.
As shown in FIG. 1, during the reset step, the gate of reset transistor 14 is briefly pulsed with a reset voltage (5 volts) which resets photodiode 12 to an initial integration voltage which is approximately equal to the voltage on the drain of transistor 14 less the threshold voltage of transistor 14.
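The reset level described above can be sketched numerically. The 5-volt drain voltage comes from the description; the threshold voltage of transistor 14 is an assumed illustrative value:

```python
# Sketch of the reset step: the photodiode is set to an initial
# integration voltage equal to the drain voltage of reset transistor 14
# less that transistor's threshold voltage.

V_DRAIN = 5.0   # volts on the drain of reset transistor 14
V_TH = 0.7      # threshold voltage of transistor 14 (assumed value)

def reset_photodiode(v_drain=V_DRAIN, v_th=V_TH):
    """Return the initial integration voltage after the reset pulse."""
    return v_drain - v_th

print(reset_photodiode())  # initial integration voltage, in volts
```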
During integration, light energy, in the form of photons, strikes photodiode 12, thereby creating a number of electron-hole pairs. Photodiode 12 is designed to limit recombination between the newly formed electron-hole pairs. As a result, the photogenerated holes are attracted to the ground terminal of photodiode 12, while the photogenerated electrons are attracted to the positive terminal of photodiode 12 where each additional electron reduces the voltage on photodiode 12.
Thus, at the end of the integration period, the number of photons which were absorbed by photodiode 12 during the image integration period can be determined by subtracting the voltage at the end of the integration period from the voltage at the beginning of the integration period.
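Under the idealized assumptions that each absorbed photon yields one collected electron and that the photodiode behaves as a fixed capacitance, the photon count follows directly from the voltage drop. The capacitance value below is an illustrative assumption, not a figure from the description:

```python
Q_E = 1.602e-19   # electron charge, in coulombs
C_PD = 10e-15     # assumed photodiode capacitance (10 fF)

def absorbed_photons(v_start, v_end, c_pd=C_PD):
    """Photons absorbed during integration, inferred from the voltage
    drop on the photodiode. Assumes one collected electron per
    absorbed photon and a fixed photodiode capacitance."""
    delta_v = v_start - v_end            # each electron lowers the voltage
    return round(delta_v * c_pd / Q_E)   # collected charge / electron charge

print(absorbed_photons(4.3, 3.5))  # photons for an 0.8 V drop
```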
Following the image integration period, active pixel sensor cell 10 is read out by turning on select transistor 18. At this point, the reduced voltage on photodiode 12, less the threshold voltage of buffer transistor 16, is present on the source of buffer transistor 16. When select transistor 18 is turned on, the voltage on the source of buffer transistor 16 is then transferred to the source of select transistor 18. The reduced voltage on the source of select transistor 18 is detected by conventional detection circuitry.
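The readout path can be sketched as follows; the buffer threshold voltage is again an assumed illustrative value:

```python
V_TH_BUFFER = 0.7   # threshold voltage of buffer transistor 16 (assumed)

def readout(v_photodiode, select_on, v_th_buffer=V_TH_BUFFER):
    """Voltage presented to the detection circuitry when the cell is
    read out. The buffer transistor drops one threshold voltage; the
    select transistor simply gates that voltage onto its source."""
    v_source = v_photodiode - v_th_buffer   # source of buffer transistor 16
    return v_source if select_on else None  # select transistor 18 on/off

print(readout(3.5, select_on=True))
```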
One problem with active pixel sensor cell 10, however, is that imaging systems which utilize an array of active pixel sensor cells suffer from a limited dynamic range. Conventionally, the dynamic range is defined by the ratio of the maximum number of photons that a cell 10 can collect during an integration period without saturating (exceeding the capacity of) the cell 10 to the minimum number of photons that a cell 10 can collect during the integration period that can still be detected over the noise floor.
The effect of a limited dynamic range is most pronounced in images that contain both bright-light and low-light sources. In these situations, if the integration period of the array is shortened to the point where none of the bright-light information is lost, i.e., where the number of collected photons will not exceed the capacity of the cell during the integration period, then most, if not all, of the low-light information will be lost (resulting in a black image) because the collected photons will not be distinguishable over the noise level.
On the other hand, if the integration period of the array is increased to capture the low-light information, i.e., where the number of collected photons is detectable over the noise floor, then a significant portion of the bright-light information is lost (resulting in a white image) because the number of collected photons will far exceed the capacity of the cell.
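The trade-off described in the two preceding paragraphs can be sketched as a simple clipping model. The full-well capacity, noise floor, photon rates, and integration times below are all assumed for illustration:

```python
FULL_WELL = 50_000    # assumed cell capacity, in photons (saturation)
NOISE_FLOOR = 50      # assumed minimum detectable photon count

def measured(photon_rate, t_int):
    """Photons a cell reports for a given arrival rate and integration
    time: zero below the noise floor, clipped at the full-well
    capacity above it."""
    collected = photon_rate * t_int
    if collected < NOISE_FLOOR:
        return 0                       # low-light information lost (black)
    return min(collected, FULL_WELL)   # bright-light information lost (white)

bright, dim = 1_000_000, 100   # photon arrival rates per second (assumed)
short_t, long_t = 0.01, 1.0    # integration periods, in seconds (assumed)

# The short exposure captures the bright source but loses the dim one;
# the long exposure captures the dim source but saturates on the bright one.
print(measured(bright, short_t), measured(dim, short_t))
print(measured(bright, long_t), measured(dim, long_t))
```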
One approach to solving the problem of dynamic range is to utilize a non-integrating active pixel sensor cell with a non-linear load device, such as a MOSFET-diode in weak inversion, to obtain a logarithmic response. This approach, however, has a number of drawbacks.
First, the noise in a non-integrating cell is much higher than the noise in a conventional integrating cell (such as cell 10 of FIG. 1). In a conventional integrating cell, the effect of random noise events is averaged over the integration period, while the effect of random noise events in a non-integrating cell can produce substantial distortions. Second, the exact non-linear transfer function of this type of device must be carefully calibrated to avoid variations from cell to cell and due to temperature changes.
Another approach to solving the problem of dynamic range, which is used with CCD systems, is to integrate twice: once with a short exposure and once with a long exposure. For the short exposure, the bright-light information is saved while the low-light information is discarded. Similarly, for the long exposure, the low-light information is saved while the bright-light information is discarded.
The information from the two exposures is then combined to form a composite image. The drawback with this approach, however, is that the resulting image is formed by combining image data from two different periods of time.
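One way to sketch the dual-exposure combination is per-pixel selection: wherever the long exposure saturated, substitute the short-exposure value rescaled to the long exposure time. The saturation threshold, exposure ratio, and pixel values are illustrative assumptions:

```python
FULL_WELL = 50_000   # assumed saturation level, in photons

def composite(short_img, long_img, ratio):
    """Combine a short and a long exposure into one image. Where the
    long exposure saturated, substitute the short-exposure value
    scaled by the exposure-time ratio; elsewhere keep the long
    exposure, which preserved the low-light information."""
    out = []
    for s, l in zip(short_img, long_img):
        if l >= FULL_WELL:            # bright pixel: long exposure clipped
            out.append(s * ratio)     # rescale short exposure to long-exposure units
        else:                         # low-light pixel: long exposure is valid
            out.append(l)
    return out

short_img = [10_000, 1]    # bright pixel unclipped; dim pixel lost in noise
long_img = [50_000, 100]   # bright pixel saturated; dim pixel valid
print(composite(short_img, long_img, ratio=100))
```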
Thus, to successfully capture both bright-light and low-light sources in the same image, there is a need for an imaging array of active pixel sensor cells with a substantially increased dynamic range.