Image sensors are devices that capture and process light into electronic signals for forming still images or video. Their use has become prevalent in a variety of consumer, industrial, and scientific applications, including digital cameras and camcorders, hand-held mobile devices, webcams, medical applications, automotive applications, games and toys, security and surveillance, pattern recognition, and automated inspection, among others. The technology used to manufacture image sensors has continued to advance at a rapid pace.
There are two main types of image sensors available today: Charge-Coupled Device (“CCD”) sensors and Complementary Metal Oxide Semiconductor (“CMOS”) sensors. Until recently, the majority of image sensors have been of the CCD type. Early CMOS sensors suffered from poor light sensitivity and high noise levels that restricted their use to only a few low-cost and low-resolution applications. Recent advances in CMOS technology have led to the development of high performance CMOS sensors that are quickly replacing CCDs in a host of other applications, particularly in those where speed, power consumption, size, and on-chip functionality are important factors.
In either type of image sensor, light-gathering photosites are formed on a substrate and arranged in a two-dimensional array. The photosites, generally referred to as picture elements or “pixels,” convert the incoming light into an electrical charge. The number, size, and spacing of the pixels determine the resolution of the images generated by the sensor. Modern image sensors typically contain millions of pixels in the pixel array to provide high-resolution images.
The electrical charges accumulated by each pixel in the pixel array are typically read out by a “readout circuit,” where they are converted into digital image samples based on the order in which the pixels in the pixel array are selected for readout. The readout circuit may include a combination of amplifiers, sample and hold circuits, analog to digital converters (“ADC”), and other circuit elements for converting the two-dimensional electrical charges into the digital image samples. The digital image samples may be further processed at an Image Signal Processor (“ISP”) or other Digital Signal Processor (“DSP”) to generate a digital image output.
Several approaches are available for selecting the order in which the pixels in the pixel array are to be read out by the readout circuit. For example, pixels in the array may be individually read out and processed sequentially. Alternatively, all pixels in a row may be read out simultaneously and processed in parallel by a readout circuit for each column. The processed signals are stored in a line memory and then read out sequentially. Because each column readout circuit in this case processes only one pixel per row period, rather than the entire pixel stream, its operating frequency and power requirements are significantly reduced. This parallel approach is used in most CMOS image sensor devices.
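The row-parallel readout order described above can be sketched as follows. The array contents and function names here are illustrative assumptions, not taken from any figure; the point is only the ordering: each row is sampled into a line memory in parallel, and each line memory is then scanned out sequentially.

```python
# Hypothetical 4x3 pixel array of accumulated charges (arbitrary units);
# the values are illustrative only.
pixel_array = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
    [15, 25, 35],
]

def row_parallel_readout(array):
    """Read out one row at a time: all columns are sampled in parallel
    into a line memory, which is then scanned out sequentially."""
    samples = []
    for row in array:                # row selector activates one row line
        line_memory = list(row)      # column circuits sample the row in parallel
        samples.extend(line_memory)  # line memory is read out sequentially
    return samples

print(row_parallel_readout(pixel_array))
```

Each column circuit touches only one value per row period, which is why the parallel scheme relaxes the speed and power requirements noted above.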
An example of a CMOS image sensor device employing this parallel approach is illustrated in FIG. 1. CMOS image sensor device 100 includes a pixel array 105 with a plurality of pixels arranged in a two-dimensional pattern of row and column lines. The CMOS image sensor device 100 is operated by a controller 110, which controls the selection of pixels from the pixel array 105 to be read out. All pixels in a row are turned on simultaneously and read out in parallel by a plurality of column readout circuits 115. The pixels in a row line are selected and activated by row selector circuit 120 in response to control signals from controller 110. The row selector circuit 120 applies a driving voltage to the selected row line to activate the pixels in the selected line. The pixels in the selected line are then read out by the column readout circuits 115 in response to control signals from controller 110.
Each column line is connected to a column readout circuit. The column readout circuits are, in turn, connected to pixel output stage 125. The pixel output stage 125 takes the electrical charges read out by the column readout circuits 115 and converts them into digital image samples. The samples are then processed at processor 130 for generating the digital image output 135.
An example of column readout circuits 115 connected to an output stage 125 is shown in FIG. 2. Column readout circuits 115a-b include column amplifiers 205a-b for amplifying the electrical charges read out from pixels 210a in column line 215a and the electrical charges read out from pixels 210b in column line 215b, respectively. Column readout circuits 115a-b also include sample and hold circuits 220a-b for reading out the amplified charges. The column lines 215a-b, also referred to as “bit lines,” are the lines to which all of the pixels of a given column are connected and from which the electrical charges from each pixel are read.
The electrical charges are input into pixel output stage 125, which includes a second-stage or global amplifier 225 for further amplification of the electrical charges and an ADC 230 for converting the electrical charges into digital image samples.
A typical pixel in pixel array 105 may employ a photodetector followed by a four-transistor (“4T”) configuration as shown in FIG. 3. Pixel 300 includes a photodetector 305 followed by a transfer transistor 310, a reset transistor 315, a source follower transistor 320, and a row select transistor 325. The photodetector 305 converts the incident light into an electrical charge. The electrical charge is received by a floating diffusion region 330 through the transfer transistor 310 when the transfer transistor 310 is activated by the transfer gate control signal “TX.” The reset transistor 315 is connected between the floating diffusion region 330 and a supply voltage line 335. A reset control signal “RST” is used to activate the reset transistor 315 for resetting the floating diffusion region 330 to the supply voltage Vcc at supply voltage line 335 prior to transferring the electrical charge from photodetector 305.
The source follower transistor 320 is connected to the floating diffusion region 330 between the supply voltage line 335 and the row select transistor 325. The source follower transistor 320 converts the electrical charge stored at the floating diffusion region 330 into an output voltage “Vout.” The row select transistor 325 is controlled by a row select signal “RS” for selectively connecting the source follower transistor 320 and its output voltage Vout to a column line 340 of a pixel array.
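The reset/transfer/select sequence of the 4T pixel can be modeled in a highly simplified way as follows. The supply voltage, conversion gain, and source-follower gain are assumed values chosen for illustration only; a real pixel's behavior depends on device physics not captured here.

```python
# Illustrative model of one 4T pixel readout cycle. All constants are
# assumptions for the sketch, not taken from the figures.
VCC = 3.3        # supply voltage at the supply voltage line (assumed)
CONV_GAIN = 0.5  # volts of FD swing per unit of transferred charge (assumed)
SF_GAIN = 0.8    # source-follower gain, typically slightly below unity

def read_pixel(photo_charge, rst, tx, rs):
    """Return Vout on the column line for one reset/transfer/select sequence."""
    fd = 0.0                      # floating diffusion voltage
    if rst:                       # RST: reset FD to the supply voltage
        fd = VCC
    if tx:                        # TX: transfer photodetector charge onto FD,
        fd -= CONV_GAIN * photo_charge  # pulling its voltage down
    vout = SF_GAIN * fd           # source follower buffers the FD voltage...
    return vout if rs else None   # ...onto the column line only when RS is high

print(read_pixel(photo_charge=2.0, rst=True, tx=True, rs=True))
```

Note that a brighter pixel (larger transferred charge) produces a lower Vout, which is why the readout chain compares the post-transfer level against the reset level.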
The 4T configuration shown at FIG. 3 was introduced to improve the overall image quality produced by CMOS image sensor devices. Image quality at a CMOS image sensor device depends on a host of factors, such as, for example, the noise sources introduced by the circuitry in the sensor and the dynamic range achievable with such circuitry. The noise sources include fixed pattern noise (“FPN”) and read noise introduced by the column readout circuits, reset noise introduced by the reset transistor, photon shot noise introduced by the photodetector, and other noise sources such as dark current noise and thermal noise.
The FPN can be significantly reduced or eliminated with the use of specialized column amplifiers or by performing flat-field correction. For example, U.S. Pat. No. 6,128,039 describes a column amplifier using a switched-capacitor amplifier for high FPN reduction. The reset noise can also be eliminated with the use of a technique called Correlated Double Sampling (“CDS”) at the sample and hold circuit stage of the column readout circuit. CDS samples the voltage output at the column line twice: once just after the floating diffusion region is reset, and once after the electrical charge has been transferred and appears at the output of the source follower transistor. The two samples are then subtracted from each other, thereby cancelling the reset noise. Other forms of noise, such as the photon shot noise, the dark current noise, and the thermal noise, are more difficult to cancel.
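The cancellation achieved by CDS can be illustrated with a short numerical sketch. The noise magnitude and signal level below are assumptions; the point is that the same reset (kTC) noise term is frozen into both samples, so it drops out on subtraction regardless of its value.

```python
import random

def cds_sample(signal_level):
    """Correlated double sampling: the reset noise frozen on the floating
    diffusion appears identically in the reset sample and the signal
    sample, so subtracting the two cancels it exactly."""
    reset_noise = random.gauss(0.0, 0.05)   # kTC noise frozen at reset (assumed sigma)
    reset_sample = 1.0 + reset_noise        # sample 1: taken just after reset
    signal_sample = 1.0 + reset_noise - signal_level  # sample 2: after charge transfer
    return reset_sample - signal_sample     # reset noise cancels, signal remains

print(cds_sample(0.42))
```

Uncorrelated noise sources, such as photon shot noise arriving between the two samples, would differ between the samples and therefore survive the subtraction, consistent with the observation above that such noise is harder to cancel.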
Several approaches have also been proposed to improve the dynamic range of CMOS image sensors. The dynamic range is defined as the ratio of the largest detectable luminance signal to the smallest. A high dynamic range is desirable in low-light conditions and for capturing images with large variations in luminance, that is, for capturing the wide range of luminance levels found in most real-world scenes. As the dynamic range of a sensor is increased, the ability to simultaneously record the dimmest and brightest intensities in an image is improved.
The dynamic range of an image sensor is usually expressed in gray levels, decibels, or bits. Image sensors having higher signal-to-noise ratios produce higher dynamic range values (more decibels or bits). In addition, image sensors having ADCs of higher bit depths also produce higher dynamic range values. For example, a 12-bit ADC corresponds to slightly over 4,000 gray levels, or about 72 dB, while a 10-bit ADC can resolve only about 1,000 gray levels, for a roughly 60 dB dynamic range.
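The correspondence between ADC bit depth, gray levels, and dynamic range in decibels can be checked with a short calculation, assuming an ideal ADC whose smallest detectable signal is one least significant bit:

```python
import math

def adc_dynamic_range(bits):
    """Gray levels and dynamic range in dB for an ideal N-bit ADC,
    taking one LSB as the smallest detectable signal."""
    levels = 2 ** bits                 # number of distinguishable gray levels
    db = 20.0 * math.log10(levels)     # ratio of largest to smallest signal, in dB
    return levels, db

for bits in (10, 12, 14):
    levels, db = adc_dynamic_range(bits)
    print(f"{bits}-bit ADC: {levels} gray levels, {db:.1f} dB")
```

This reproduces the figures cited above: a 12-bit ADC yields 4,096 gray levels (about 72 dB) and a 10-bit ADC yields 1,024 gray levels (about 60 dB), with each additional bit adding roughly 6 dB.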
Efforts for improving the dynamic range of CMOS image sensors have focused on designing improved pixel cells, column amplifiers, or readout circuits. CMOS image sensor devices employing high bit depth ADCs, such as 14-bit ADCs, are known, but these devices tend to be costly, require more complex column readout circuits, and consume a considerably higher amount of power and semiconductor die area as compared to their CCD counterparts.
Accordingly, it would be desirable to provide a CMOS image sensor apparatus that provides an improved dynamic range and better noise reduction without requiring costly and power-hungry column readout circuits. In particular, it would be desirable to provide a CMOS image sensor apparatus that is capable of emulating the high dynamic ranges achievable by higher bit depth ADCs without the more complex column readout circuits associated with such ADCs.