CMOS image sensors are increasingly being used as a lower-cost alternative to CCDs. A CMOS image sensor circuit includes a focal plane array of pixel cells, each of which includes a photogate, photoconductor, or photodiode having an associated charge accumulation region within a substrate for accumulating photo-generated charge. Each pixel cell may include a transistor for transferring charge from the charge accumulation region to a sensing node, and a transistor for resetting the sensing node to a predetermined charge level prior to charge transfer. The pixel cell may also include a source follower transistor for receiving and amplifying charge from the sensing node and an access transistor for controlling readout of the cell contents from the source follower transistor.
In a CMOS image sensor, the active elements of a pixel cell perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) transfer of charge to the sensing node accompanied by charge amplification; (4) resetting the sensing node to a known state before the transfer of charge to it; (5) selection of a pixel for readout; and (6) output and amplification of a signal representing pixel charge from the sensing node.
CMOS image sensors of the type discussed above are generally known as discussed, for example, in Nixon et al., “256×256 CMOS Active Pixel Sensor Camera-on-a-Chip,” IEEE Journal of Solid-State Circuits, Vol. 31(12), pp. 2046-2050 (1996); and Mendis et al., “CMOS Active Pixel Image Sensors,” IEEE Transactions on Electron Devices, Vol. 41(3), pp. 452-453 (1994). See also U.S. Pat. Nos. 6,177,333 and 6,204,524, which describe operation of conventional CMOS image sensors, the contents of which are incorporated herein by reference.
A conventional CMOS pixel cell 10 is illustrated in FIGS. 1 and 2. FIG. 1 is a schematic top view of a portion of a semiconductor wafer fragment containing the exemplary pixel cell 10 and FIG. 2 is a circuit diagram of the pixel cell 10. The CMOS pixel cell 10 is a four transistor (4T) cell. Pixel cell 10 comprises a photo-conversion device, typically a photodiode 21, for collecting charges generated by light incident on the pixel. A transfer gate 7 transfers photoelectric charges from the photodiode 21 to a sensing node, typically a floating diffusion region 3. Floating diffusion region 3 is electrically connected to the gate of an output source follower transistor 27. The pixel cell 10 also includes a reset transistor having a gate 17 for resetting the floating diffusion region 3 to a predetermined voltage before sensing a signal; a source follower transistor 27 which receives at its gate an electrical signal from the floating diffusion region 3; and a row select transistor 37 for outputting a signal from the source follower transistor 27 to an output column line in response to an address signal.
Impurity doped source/drain regions 32 (FIG. 1), having n-type conductivity, are provided on either side of the transistor gates 17, 27, 37. Conventional processing methods are used to form, for example, contacts 33 (FIG. 1) in an insulating layer to provide an electrical connection to the source/drain regions 32, the floating diffusion region 3, and other wiring to connect to gates and form other connections in the pixel cell 10.
In the pixel cell 10 depicted in FIG. 1, electrons generated by incident light are stored in the photodiode 21. These charges are transferred to the floating diffusion region 3 by the gate 7 of the transfer transistor, and the source follower transistor 27 produces an output signal from the transferred charges.
Image sensors, such as an image sensor employing the conventional pixel cell 10, have a characteristic dynamic range. Dynamic range refers to the range of incident light that can be accommodated by an image sensor in a single frame of pixel data. It is desirable to have an image sensor with a high dynamic range to image scenes that generate high dynamic range incident signals, such as indoor rooms with windows to the outside, outdoor scenes with mixed shadows and bright sunshine, night-time scenes combining artificial lighting and shadows, and many others.
The dynamic range for an image sensor is commonly defined as the ratio of its largest non-saturating signal to the standard deviation of the noise under dark conditions. The dynamic range is limited on an upper end by the charge saturation level of the sensor, and on a lower end by noise imposed limitations and/or quantization limits of the analog to digital converter used to produce the digital image. When the dynamic range of an image sensor is too small to accommodate the variations in light intensities of the imaged scene, image distortion occurs.
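As a rough numeric illustration of this definition (the function and the sample values below are hypothetical, not from the source), the dynamic range in decibels is simply 20·log10 of the largest non-saturating signal over the RMS dark noise:

```python
import math

def dynamic_range_db(max_signal_e, dark_noise_rms_e):
    """Dynamic range in dB: 20*log10 of the largest non-saturating
    signal divided by the RMS noise measured under dark conditions,
    both expressed in electrons."""
    return 20.0 * math.log10(max_signal_e / dark_noise_rms_e)

# Hypothetical sensor: 20,000 e- full well, 10 e- RMS dark noise
print(round(dynamic_range_db(20_000, 10.0), 1))  # → 66.0
```

A factor-of-ten increase in the non-saturating signal (or decrease in noise) adds 20 dB under this definition.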
Dynamic range in a charge coupled device (CCD), DR_CCD, can be expressed as:

$$ DR_{CCD} = 20\log\left[\frac{N_{sat}}{\sqrt{\left(\frac{\sigma_{output}}{G_0}\right)^2 + N_{dark}^2}}\right] $$

where N_sat is the electron capacity of the CCD, σ_output is the RMS read noise voltage of the sensor output stage, G_0 is the conversion gain, and N_dark is the dark current shot noise expressed in RMS electrons. Therefore, maximizing the conversion gain can increase the dynamic range of the CCD until the output stage saturates. See Blanksby et al., “Performance Analysis of a Color CMOS Photogate Image Sensor,” IEEE Transactions on Electron Devices, Vol. 47(1), pp. 55-64 (2000), which is incorporated herein by reference.
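The CCD expression above can be sketched directly in code; only the formula comes from the text, while the numeric values below are hypothetical. Note that dividing the read noise voltage σ_output by the conversion gain G_0 converts it to equivalent RMS electrons so it can be summed in quadrature with the dark current shot noise:

```python
import math

def dr_ccd_db(n_sat, sigma_output, g0, n_dark):
    """DR_CCD = 20*log10(N_sat / sqrt((sigma_output/G_0)^2 + N_dark^2)).

    n_sat        -- electron capacity of the CCD (e-)
    sigma_output -- RMS read noise voltage of the output stage (V)
    g0           -- conversion gain (V per e-)
    n_dark       -- dark current shot noise (RMS e-)
    """
    read_noise_e = sigma_output / g0          # referred back to electrons
    total_noise_e = math.sqrt(read_noise_e ** 2 + n_dark ** 2)
    return 20.0 * math.log10(n_sat / total_noise_e)

# Hypothetical values: 100,000 e- capacity, 60 uV RMS read noise,
# 4 uV/e- conversion gain, 8 e- RMS dark current shot noise
print(round(dr_ccd_db(100_000, 60e-6, 4e-6, 8.0), 1))  # → 75.4
```

With these numbers, raising G_0 shrinks the read-noise term (60 µV / 4 µV/e⁻ = 15 e⁻) and thus raises the dynamic range, consistent with the observation that maximizing conversion gain helps until the output stage saturates.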
In a CMOS photodiode architecture, such as the pixel cell 10 shown in FIGS. 1 and 2, however, the saturation level is determined by read-out circuit considerations. The threshold voltage drops across the reset and source follower transistors 17 and 27 limit the available swing at the floating diffusion node 3. In this case the dynamic range can be expressed as:
$$ DR_{CMOS\text{-}APS} = 20\log\left[\frac{V_{dd} - V_{t(reset)} - V_{t(source\text{-}follower)}}{\sqrt{\left(\frac{\sigma_{output}}{A_{SF}}\right)^2 + \left(G_{FD}N_{dark}\right)^2 + \left(G_{FD}N_{reset}\right)^2}}\right] $$

where V_t(reset) and V_t(source-follower) are the threshold voltages of the reset and source follower devices, respectively, A_SF is the source follower gain, G_FD is the conversion gain of the floating diffusion node, and N_reset is the reset noise expressed in RMS electrons.
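The CMOS APS expression can likewise be sketched in code; again, only the formula itself comes from the text, and the sample parameter values are hypothetical. Here the numerator is the usable voltage swing at the floating diffusion node, and the electron-denominated noise terms are converted to volts through G_FD:

```python
import math

def dr_cmos_aps_db(vdd, vt_reset, vt_sf, sigma_output, a_sf, g_fd, n_dark, n_reset):
    """DR for a CMOS APS photodiode pixel: usable swing over total noise.

    vdd           -- supply voltage (V)
    vt_reset      -- threshold voltage of the reset transistor (V)
    vt_sf         -- threshold voltage of the source follower (V)
    sigma_output  -- RMS read noise voltage at the output (V)
    a_sf          -- source follower gain (unitless)
    g_fd          -- floating diffusion conversion gain (V per e-)
    n_dark        -- dark current shot noise (RMS e-)
    n_reset       -- reset noise (RMS e-)
    """
    swing = vdd - vt_reset - vt_sf            # threshold drops limit the swing
    noise = math.sqrt((sigma_output / a_sf) ** 2
                      + (g_fd * n_dark) ** 2
                      + (g_fd * n_reset) ** 2)
    return 20.0 * math.log10(swing / noise)

# Hypothetical values: 3.3 V supply, 0.7 V thresholds, 300 uV read noise,
# 0.85 source follower gain, 25 uV/e- conversion gain, 10 e- dark noise,
# 20 e- reset noise
print(round(dr_cmos_aps_db(3.3, 0.7, 0.7, 300e-6, 0.85, 25e-6, 10.0, 20.0), 1))
```

The sketch makes the scaling problem discussed below concrete: lowering vdd reduces the numerator directly, so the computed dynamic range falls with the supply voltage.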
In a CMOS photodiode sensor, G_FD and N_dark are typically small, resulting in a large dynamic range. As pixel dimensions are scaled down, V_dd is typically reduced, which may lead to a reduction of the dynamic range. Accordingly, techniques are needed to improve the dynamic range in image sensors; specifically, circuit-level techniques are needed to improve gain in the signal path to achieve a high dynamic range as pixel dimensions are reduced.