Imagers typically consist of a single array of pixel cells containing photosensors, where each pixel cell produces a signal corresponding to the intensity of light impinging on its photosensor when an image is focused on the array by one or more lenses. These signals may then be stored, for example, to display a corresponding image on a monitor or otherwise used to provide information about the optical image. The magnitude of the signal produced by each pixel is substantially proportional to the amount of light impinging on the photosensor.
To allow the imager to capture a color image, the pixel cells must be able to separately detect red (R) light, green (G) light, and blue (B) light. A color filter array is typically placed in front of the array of pixel cells so that each pixel cell measures only light of the color of its associated filter.
Alternatively, an imager may comprise a plurality of pixel cell arrays. Each array is often referred to as a “sub-array” and the imager is itself often referred to as a “multi-array” imager. Each sub-array is typically sensitive to only one color of light. The images from each of the sub-arrays are combined to form a full-color image. Such imagers are disclosed in U.S. patent application Ser. No. 11/367,580, Ser. No. 11/540,673, and Ser. No. 11/642,867, all assigned to Micron Technology, Inc. and incorporated herein by reference in their entirety.
Multi-array imagers offer numerous advantages over conventional single-array imagers. For example, because each sub-array is typically sensitive to only a single color, crosstalk between adjacent, differently colored pixels of a Bayer-patterned array is reduced, thereby improving overall color performance and removing color shading artifacts. Moreover, the design of the lens used to focus light on each sub-array may be simplified because each lens is only required to operate over the relatively narrow portion of the spectrum detectable by its respective sub-array.
While they offer numerous advantages, multi-array imagers suffer from parallax error due to the side-by-side arrangement of the sub-arrays. In particular, imagers comprising a plurality of sub-arrays arranged in a linear configuration suffer from linear shift parallax error, where imaged objects appear at different locations along an axis of each sub-array. FIG. 1 illustrates linear shift parallax error. Light from an object 1 to be imaged passes through lenses 2, 3, and 4 associated with sub-arrays 5, 6, and 7, respectively. Light from the object 1, and thus an image of the object 1, registers at a different position along the x-axis of each sub-array. Sub-array 6 registers the image of object 1 at position X0. Sub-array 7 registers the image of object 1 at a different position X− shifted in one direction from position X0. Sub-array 5 registers the image of object 1 at yet another position X+ shifted in the opposite direction from position X0.
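The geometry underlying this linear shift can be sketched as follows. This is a minimal illustrative model, not part of the disclosed apparatus: the function name, the specific numeric values, and the thin-lens similar-triangles approximation (shift proportional to focal length times lens offset, inversely proportional to object depth) are assumptions for illustration only.

```python
# Hypothetical sketch of linear shift parallax in a linear multi-array imager.
# Assumption: thin-lens geometry, so a lens offset by `baseline` along the
# x-axis sees the object shifted by f * baseline / depth relative to the
# reference sub-array.

def registered_position(x0, focal_length, baseline, depth):
    """Position at which the object registers on a sub-array whose lens is
    displaced by `baseline` along the x-axis from the reference lens."""
    return x0 - focal_length * baseline / depth  # shift shrinks as depth grows

# Reference (center) sub-array registers the object at X0; the flanking
# sub-arrays register it shifted in opposite directions (X- and X+ in FIG. 1).
X0 = 0.0
f, d = 2.0, 500.0   # focal length and object depth, arbitrary consistent units
pitch = 3.0         # lens-to-lens spacing, same units

x_minus = registered_position(X0, f, +pitch, d)  # analogous to X-
x_plus = registered_position(X0, f, -pitch, d)   # analogous to X+
```

Note that because the shift depends on object depth, a single fixed correction cannot align objects at all distances; this is what makes parallax correction nontrivial.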
FIG. 2 shows the images formed from each sub-array. Images 8, 9, and 10 are the images formed from sub-arrays 7, 6, and 5, respectively. Combining the images without correcting the linear shift parallax error yields an undesirable image in which the object 1 appears simultaneously in several, typically overlapping positions, as shown in image 19 of FIG. 7.
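One conceptual way to see why uncorrected combination produces the ghosting of image 19, and how alignment removes it, is sketched below. This is an assumed toy illustration, not the method of the referenced applications: the 1-D rows, the integer shift values, and the helper function are hypothetical.

```python
# Hypothetical sketch: if the per-sub-array parallax shift is known (e.g., from
# calibration or a disparity search), each color plane can be translated back
# before the planes are combined, so the object registers at one position.

def shift_row(row, shift, fill=0):
    """Translate a 1-D image row by an integer pixel shift, padding with fill."""
    n = len(row)
    out = [fill] * n
    for i, v in enumerate(row):
        j = i + shift
        if 0 <= j < n:
            out[j] = v
    return out

# Toy 1-D rows from three sub-arrays imaging the same bright object:
red = [0, 0, 9, 0, 0, 0]    # object shifted one pixel left (position X-)
green = [0, 0, 0, 9, 0, 0]  # reference position X0
blue = [0, 0, 0, 0, 9, 0]   # object shifted one pixel right (position X+)

# Combining the raw rows would place the object at three different indices
# (the ghosted result of image 19); shifting first aligns all three planes.
aligned = [shift_row(red, +1), green, shift_row(blue, -1)]
```

After alignment, the object occupies the same pixel index in every color plane, so the combined full-color image shows it in a single position.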
Thus, an efficient method for correcting linear shift parallax error in multi-array imagers is desirable.