A CMOS imager circuit includes a focal plane array of pixels, each pixel including a photosensor, for example, a photogate, photoconductor, or photodiode, overlying a substrate for accumulating photo-generated charge in the underlying portion of the substrate. Each pixel has a readout circuit that includes at least an output field effect transistor formed in the substrate and a charge storage region formed on the substrate connected to the gate of the output transistor. The charge storage region may be constructed as a floating diffusion region. Each pixel may include at least one electronic device, such as a transistor, for transferring charge from the photosensor to the storage region and one device, also typically a transistor, for resetting the storage region to a predetermined charge level prior to charge transfer.
In a CMOS imager, the active elements of a pixel perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) resetting the storage region to a known state before the transfer of charge to it; (4) transfer of charge to the storage region accompanied by charge amplification; (5) selection of a pixel for readout; and (6) output and amplification of a signal representing pixel charge. Photo-generated charge may be amplified when it moves from the initial charge accumulation region to the storage region. The charge at the storage region is typically converted to a pixel output voltage by a source follower output transistor.
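The six pixel operations above can be illustrated as a simple numeric sketch. This is not part of the patent; all constants (full-well capacity, conversion gain, source-follower gain, reset voltage) are illustrative assumptions, and the helper `read_pixel` is a hypothetical name.

```python
# Hypothetical sketch of the six 4T pixel operations as arithmetic.
# All constants below are illustrative assumptions, not values from the patent.

FULL_WELL_E = 10_000        # max accumulated electrons (assumed full-well capacity)
CONVERSION_GAIN_UV = 50.0   # floating-diffusion conversion gain, uV/electron (assumed)
SF_GAIN = 0.8               # source-follower voltage gain (assumed)
V_RESET = 2.5               # reset level of the floating diffusion, volts (assumed)

def read_pixel(photons: int, quantum_efficiency: float = 0.5) -> float:
    """Model steps (1)-(6): convert photons to charge, clip at full well,
    reset the floating diffusion, transfer charge, and output through the
    source follower."""
    # (1)-(2) photon-to-charge conversion and charge accumulation
    electrons = min(int(photons * quantum_efficiency), FULL_WELL_E)
    # (3) reset: the floating diffusion starts at a known voltage
    v_fd = V_RESET
    # (4) transfer: charge on the floating diffusion lowers its voltage
    v_fd -= electrons * CONVERSION_GAIN_UV * 1e-6
    # (5)-(6) row select and source-follower output voltage
    return SF_GAIN * v_fd
```

A brighter pixel (more photons) yields a larger voltage drop from the reset level, which is the signal the column circuitry ultimately digitizes.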
FIG. 1 shows a top-down view of an individual four-transistor (4T) pixel 10 of a CMOS imaging device. A pixel 10 generally comprises a transfer gate 50 for transferring photoelectric charges generated in a photosensor 21 (shown as a pinned photodiode) to a floating diffusion region FD acting as a sensing node, which is, in turn, electrically connected to the gate 60 of an output source follower transistor. A reset gate 40 is provided for resetting the floating diffusion region FD to a predetermined voltage in order to sense a next signal, and a row select gate 80 is provided for outputting a signal from the source follower transistor to an output terminal in response to a pixel row select signal. The various transistors are coupled to each other via their source/drain regions 22 and coupled to other elements of the imaging device via the contacts 32.
CMOS imagers of the type discussed above are generally known as discussed, for example, in U.S. Pat. Nos. 6,140,630; 6,376,868; 6,310,366; 6,326,652; 6,204,524; and 6,333,205, assigned to Micron Technology, Inc., which are hereby incorporated by reference in their entirety.
FIG. 2 shows a grid layout for a conventional pixel array 100 in which all of the pixels and their respective photosensors are identically sized. Because the pixels are identically sized and spaced, the lines connecting the pixels to circuitry (not shown) located on the periphery of the pixel array 100 may also be identically spaced. It is known that pixels at various spatial locations in the pixel array 100 produce different levels of output signals for the same level of light illumination. Optical image formation is not spatially shift invariant; that is, more light is transmitted to pixels located in-line with the optical axis, such as pixels 102 located at or near the center of the pixel array 100, than to pixels 104 not in-line with the optical axis. In general, pixels may experience a cosine roll-off in incident light intensity the greater their distance from the optical axis. Therefore, pixels located out of line with the optical axis, such as corner pixels 104, receive less light than pixels located directly in-line with the optical axis, such as center pixels 102. This phenomenon is known in the art as lens shading and may cause an image generated by the pixel array 100 to be noticeably darker in the corners than in the center.
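The cosine roll-off described above can be sketched numerically. As an assumption not taken from the patent, this sketch uses the well-known cosine-fourth illumination falloff, cos⁴(θ), where θ is a pixel's off-axis angle; the function name `relative_illumination`, the array size, and the focal length are illustrative.

```python
import numpy as np

def relative_illumination(rows: int, cols: int, focal_len: float = 100.0):
    """Return a rows x cols map of relative incident light intensity,
    normalized to 1.0 for a pixel on the optical axis.

    Assumption: lens shading modeled by the cosine-fourth law, cos(theta)**4;
    focal_len is in pixel-pitch units (illustrative value)."""
    y = np.arange(rows) - (rows - 1) / 2.0   # pixel offsets from array center
    x = np.arange(cols) - (cols - 1) / 2.0
    xx, yy = np.meshgrid(x, y)
    r = np.hypot(xx, yy)                     # radial distance from optical axis
    theta = np.arctan(r / focal_len)         # off-axis angle of each pixel
    return np.cos(theta) ** 4                # cosine-fourth roll-off

shading = relative_illumination(101, 101)
# The center pixel sees full intensity; corner pixels see less (lens shading),
# matching the darker corners shown by the shading in FIG. 3.
```

Monotonic falloff with radius is what makes the corners of an uncorrected image noticeably darker than its center.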
FIG. 3 shows, by shading, the amount of light captured and displayed as brightness by each of the pixels in the pixel array shown in FIG. 2. The darker shading indicates that pixels farther from the center of the pixel array capture less light than pixels closer to the center.
Conventional CMOS imaging devices have attempted to correct lens shading during post-processing of already-acquired image data or during image acquisition (i.e., as the image is read out from the imaging device). There is a need and desire in the art for additional image correction methods that do not require post-readout image processing or special image acquisition techniques to compensate for lens shading.