Solid state imaging devices, including charge coupled devices (CCD), complementary metal oxide semiconductor (CMOS) imaging devices, and others, have been used in photo imaging applications. A solid state imaging device circuit includes a focal plane array of pixels as an image sensor, each pixel including a photosensor, which may be a photogate, photoconductor, photodiode, or other photosensor having a doped region for accumulating photo-generated charge. For CMOS imaging devices, each pixel has a charge storage region, formed on or in the substrate, which is connected to the gate of an output transistor that is part of a readout circuit. The charge storage region may be constructed as a floating diffusion region. In some CMOS imaging devices, each pixel may further include at least one electronic device such as a transistor for transferring charge from the photosensor to the storage region and one device, also typically a transistor, for resetting the storage region to a predetermined charge level.
In a CMOS imaging device, the active elements of a pixel perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) resetting the storage region to a known state; (4) transfer of charge to the storage region; (5) selection of a pixel for readout; and (6) output and amplification of a signal representing pixel charge. Photo charge may be amplified when it moves from the initial charge accumulation region to the storage region. The charge at the storage region is typically converted to an image value by a source follower output transistor.
FIG. 1 illustrates a block diagram of a CMOS imaging device 110 having a pixel array 112 incorporating pixels in columns and rows. The pixels of each row in pixel array 112 can all be turned on at the same time by a row select line and the pixels of each column are selectively output by a column select line. A plurality of row and column lines is provided for the entire pixel array 112. The row lines are selectively activated by a row driver 114 in response to a row address decoder 116 and the column select lines are selectively activated by a column driver 120 in response to a column address decoder 122.
The CMOS imaging device 110 is operated by a control circuit 124 which controls the address decoders 116, 122 for selecting the appropriate row and column lines for pixel image acquisition and readout, and the row and column driver circuits 114, 120 which apply driving voltage to the drive transistors of the selected row and column lines.
The column driver 120 is connected to analog processing circuitry 808, including sample-and-hold circuits that sample and hold signals from the pixel array 112 and differential amplifiers that correct image signals as described below, by a greenred/greenblue channel 132 and a red/blue channel 134. Although only two channels 132, 134 are illustrated, there are effectively two green channels, one red channel, and one blue channel, for a total of four channels. Greenred (i.e., Green1) and greenblue (i.e., Green2) signals are read out at different times (using channel 132), and the red and blue signals are read out at different times (using channel 134). The analog processing circuitry 808 outputs processed greenred/greenblue signals G1/G2 to a first analog-to-digital converter (ADC) 126 and processed red/blue signals R/B to a second analog-to-digital converter 128. The outputs of the two analog-to-digital converters 126, 128 are sent to a digital processor 830, which processes the signals to perform pixel processing, such as demosaicing and noise reduction, and outputs, for example, a 10-bit digital signal 136.
Each column is connectable to a sampling and holding circuit in the analog processing circuit 808 that reads a pixel reset signal VRST and a pixel image signal VSIG for selected pixel circuits. A differential signal (VRST−VSIG) is produced by differential amplifiers contained in the circuitry 808 for each pixel. The resulting signals G1/G2 (on the green channel 132) and R/B (on the red/blue channel 134) are digitized by a respective analog-to-digital converter 126, 128. The analog-to-digital converters 126, 128 supply digitized G1/G2, R/B pixel signals to the digital processor 830, which forms a digital image output (e.g., a 10-bit digital output). As noted, the digital processor 830 performs pixel processing operations.
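The differential readout described above can be sketched as follows. This is a minimal illustration, not the actual circuit behavior: the function name and the example voltage levels are assumptions, and the subtraction performed in hardware by the differential amplifiers in circuitry 808 is modeled as simple arithmetic.

```python
# Sketch of the differential (correlated double sampling) readout:
# the reset level VRST and the image level VSIG are sampled onto
# separate hold capacitors, and their difference cancels
# pixel-to-pixel reset-level variation.

def cds_difference(v_rst, v_sig):
    """Return the differential pixel signal VRST - VSIG (in volts)."""
    return v_rst - v_sig

# Example (hypothetical levels): a pixel with a 1.8 V reset level
# and a 1.2 V image level yields a 0.6 V differential signal.
signal = cds_difference(1.8, 1.2)
```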
FIG. 2 illustrates a block diagram of an example pixel array 112. The pixel array 112 contains rows and columns of pixels as described above with reference to FIG. 1. Some of these pixels, shown as active area 201, are used to generate photocharges based on incident light. Pixel array 112 may be formed on a substrate and be covered by other layers containing metal lines for carrying signals and photocharges, translucent materials to allow light to pass to photosensitive elements that create photocharges, and a color filter for controlling the wavelength range of the light that reaches each pixel's photosensitive element. The color filter may be patterned as a Bayer pattern, for example, to allow one of red, green, or blue light to reach each pixel in the active area 201. The Bayer pattern is designed such that one-half of the filters allow green light to pass, while red and blue filters each comprise 25% of the filters. Active area 201 may be surrounded on any side, or on multiple sides, by columns or rows of optical black pixels, such as optical black regions 202, 203, 204. Optical black regions 202, 203, 204 are regions that receive no light because the color filter or another mechanism (e.g., a light shield) is configured to block visible light over those pixels. One optical black region is barrier area 202, which prevents charge leakage between the active area and the surrounding dark pixels. The other optical black area shown in FIG. 2 is optical black pixel area 203. The pixels in optical black regions 202, 203 are not used to create an image; rather, they are used to compensate for noise in the image. For example, dark current in a sensor, that is, current that is present without incident light, may manifest itself in optical black pixels 203.
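The Bayer proportions described above (one-half green, one-quarter red, one-quarter blue) can be illustrated with a small sketch. The GR/BG row ordering chosen here is one common convention and is an assumption, as is the function name; the point is only the filter counts.

```python
# Minimal sketch of a Bayer color filter pattern (assumed GR/BG layout):
# even rows alternate green/red ("greenred" rows), odd rows alternate
# blue/green ("greenblue" rows), so green filters cover one-half of the
# array while red and blue each cover one-quarter.
from collections import Counter

def bayer_color(row, col):
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"   # greenred row
    else:
        return "B" if col % 2 == 0 else "G"   # greenblue row

# Count filters in an 8x8 tile: 32 green, 16 red, 16 blue.
counts = Counter(bayer_color(r, c) for r in range(8) for c in range(8))
```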
In such a system, it is possible to partially compensate for dark current in a sensor by measuring the dark current in the optical black pixels 203 and subtracting estimated dark current values from the active area 201 based on this measurement. Because this method depends on dark current values from the extreme edges of the active area, it is a crude measurement of the dark current as it affects smaller areas. At best, methods using the optical black pixels 203 approximate dark current offset values for a row of pixels.
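The row-wise approximation described above can be sketched as follows. This is a hypothetical illustration of the general approach, not the device's actual circuitry: the dark level of a row is estimated from that row's optical black pixels, and the same estimate is subtracted from every active pixel in the row.

```python
# Sketch of row-wise dark-current compensation using optical black pixels:
# average the optical black samples for a row and subtract that single
# estimate from each active pixel in the row.

def correct_row_dark_current(active_row, optical_black_row):
    """Subtract the mean optical-black level of a row from each
    active pixel value in that row."""
    dark_estimate = sum(optical_black_row) / len(optical_black_row)
    return [p - dark_estimate for p in active_row]

# Every pixel in the row receives the same offset, so any
# column-dependent (in-row) dark-current variation is not corrected.
corrected = correct_row_dark_current([100, 102, 98], [10, 12, 11])
```

Because a single estimate is applied across the whole row, this sketch also makes the limitation noted above concrete: the correction is uniform even when the true dark current varies along the row.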
As discussed above, an imager device (FIG. 1) contains a pixel array, which comprises rows and columns of pixels. Each pixel, in turn, contains a photosensor that converts incident light energy into a photocharge. This photocharge is converted to an image value that is sampled and held prior to noise correction and other processing steps. FIG. 3 depicts a flowchart for a known noise correction method for these signals.
Noise correction may take place in either or both of two domains, the analog domain and the digital domain. The analog domain processes signals prior to analog-to-digital conversion, and consists of all process steps within the area 330. The digital domain processes the digitized pixel signals in the image processor, and consists of all process steps within the area 340. The analog domain is so named because the values processed therein are analog values, such as the voltage signal read out from a pixel, or a voltage offset or gain; the digital domain includes those steps that operate on digital values.
Prior to exposure to light, pixels are reset by connecting the floating diffusion region FD (FIG. 5A) to a reset voltage VDD. This image reset value is read out and stored on a first sample-and-hold capacitor. After the pixel has been exposed to incident light, the resultant image value is read out along a column line to a second sample-and-hold capacitor. The image value stored at the second sample-and-hold capacitor may be subtracted from the reset value stored at the first sample-and-hold capacitor at this point (step 301).
The resulting value is then amplified by the analog gain (step 302) and is optionally adjusted by an analog offset value (step 303) before it is converted into a digital value by an analog-to-digital converter (step 304). The analog offset value is a value determined by the optically black pixels surrounding the active pixel array (FIG. 2). Once the signal voltage has been converted into a digital signal, the signal has moved from the analog domain 330 to the digital domain 340. The digital signal may be amplified by a digital gain value (step 305) and/or subjected to digital offset correction (step 306). The digital value may undergo certain other noise correction procedures, such as shading correction, which corrects for a disparity in the amount of light received by pixels at the extreme edges of the array (step 307). The digital value may then be stored in memory.
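The FIG. 3 signal chain can be sketched end to end as follows. All of the numeric values here (the gains, offsets, reference voltage, and the simple linear quantizer standing in for the ADC) are illustrative assumptions, not values from the device.

```python
# Hypothetical sketch of the FIG. 3 signal chain: CDS difference,
# analog gain and offset, A/D conversion, then digital gain and offset.
# Gains, offsets, v_ref, and the 10-bit linear quantizer are assumed.

def process_pixel(v_rst, v_sig, analog_gain=2.0, analog_offset=0.05,
                  digital_gain=1, digital_offset=-3, v_ref=2.0, bits=10):
    v = v_rst - v_sig                        # step 301: CDS difference
    v = v * analog_gain                      # step 302: analog gain
    v = v - analog_offset                    # step 303: optical-black analog offset
    code = int(v / v_ref * (2 ** bits - 1))  # step 304: A/D conversion
    code = code * digital_gain               # step 305: digital gain
    code = code + digital_offset             # step 306: digital offset correction
    return max(0, min(code, 2 ** bits - 1))  # clamp to the 10-bit output range
```

Shading correction (step 307) is omitted from the sketch since it depends on a pixel's position in the array rather than on its value alone.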
Another method of noise correction is shown in FIG. 4. As before, the image value is subtracted from the image reset value (step 401), and the result is amplified by an analog gain value (step 402). The signal produced by step 402 is then converted into a digital signal by an analog-to-digital converter (step 404). It is then determined whether the signal requires calibration in the analog domain (step 405). If so, a calibration offset value is determined (step 406) and subtracted from the analog signal (step 403), which is converted again into a digital value (step 404). If the signal does not need to be calibrated (as determined in step 405) or once the signal has been calibrated (step 403), the signal is then corrected for row-wise noise (step 407). This is based on the values of certain optical black pixels, as described above with reference to FIGS. 2 and 3. The signal is corrected by a channel offset (step 408) based on which color range of light the signal represents. The signal is corrected for so-called fixed pattern noise, or noise that is a result of relative sensitivities of different pixels to light (step 409). The signal may then be corrected for dark current (step 410). The signal may then be corrected for lens shading (step 411). The signal may then be amplified by a digital gain (step 412). Prior to being stored in memory, the value may be subjected to other forms of defect and noise correction. It will be obvious to one skilled in the art that the steps of the methods shown in FIGS. 3 and 4 may be re-ordered, and some steps may be added or removed, as the particular device or application requires.
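The digital-domain portion of the FIG. 4 sequence (steps 407 through 412) can be sketched as a chain of per-signal corrections. This is an illustration only: each correction is reduced to a placeholder offset or gain, and every parameter value is an assumption.

```python
# Illustrative sketch of the FIG. 4 digital correction sequence.
# Each stage is reduced to a placeholder offset subtraction or gain;
# all parameter values are assumed for the example.

def fig4_digital_corrections(code, row_offset=2, channel_offset=1,
                             fpn_offset=0, dark_offset=3,
                             shading_gain=1.0, digital_gain=2):
    code -= row_offset                # step 407: row-wise noise correction
    code -= channel_offset            # step 408: per-color channel offset
    code -= fpn_offset                # step 409: fixed pattern noise
    code -= dark_offset               # step 410: dark current
    code = int(code * shading_gain)   # step 411: lens shading correction
    return code * digital_gain        # step 412: digital gain
```

As the text notes, the order of these stages, and which of them are present at all, may vary with the particular device or application.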
These conventional methods for noise correction suffer from a number of defects. First, known methods correct for noise on a row-wise basis; that is, the methods will determine an amount of noise that affects a row and will correct for that amount of noise across each pixel in the row. This neglects the fact that noise may vary across a single row. In many imagers, row shading occurs, in which noise increases as a function of the column number in a row. Row-wise correction methods cannot correct for in-row noise.
Additionally, row-wise and column-wise correction methods often depend on a small set of pixels to sample noise, and then use the results to correct for noise on a larger set of pixels. If pixels within the small correction sample set are defective, the defect will affect a large number of pixels. Because the correction methods are applied linearly in rows or columns, a defective pixel sample used for correction will cause obvious aberrations in an image (dark or bright rows are easily discernible to the human eye).
When using some existing noise correction methods, described above with reference to FIGS. 3 and 4, the pixels sampled to determine noise levels may be physically distant from the pixels from which the noise values are removed. The result is that any regional noise, such as noise from defects in the imager, infrared reflection, or temperature variations, will not be detected by pixels that are far from the defect site. Additionally, when using rows or columns of optical black pixels to approximate dark current, a single defective or aberrant pixel in the optical black columns or rows may incorrectly influence an entire row or column in the resultant image. This leads to noticeable effects such as row-banding, in which a row of the image is noticeably brighter or darker than its neighboring rows.
Accordingly, there is a need for imager devices that apply noise correction on a pixel-wise basis. Additionally, there is a need for an imager device that samples a wide variety of pixels for noise values prior to noise correction.