A typical electronic image sensor comprises a number of light-sensitive picture elements (“pixels”) arranged in a two-dimensional array. Such an image sensor may be configured to produce a color image by forming a color filter array (CFA) over the pixels. One commonly used type of CFA pattern is the Bayer pattern, disclosed in U.S. Pat. No. 3,971,065, entitled “Color Imaging Array,” which is incorporated by reference herein. The Bayer CFA pattern provides each pixel with a color photoresponse exhibiting a predominant sensitivity to one of three designated portions of the visible spectrum. The three designated portions may be, for example, red, green and blue, or cyan, magenta and yellow. A given CFA pattern is generally characterized by a minimal repeating unit in the form of a subarray of contiguous pixels that acts as a basic building block for the pattern. Multiple copies of the minimal repeating unit are juxtaposed to form the complete pattern.
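By way of illustration, the tiling of a minimal repeating unit can be sketched as follows. This is a minimal sketch, not part of the referenced patent; the particular two-by-two unit and color labels shown are assumptions chosen to match the conventional Bayer arrangement.

```python
# Illustrative sketch: a 2x2 minimal repeating unit (assumed conventional
# Bayer arrangement: green/red on one row, blue/green on the next) is
# juxtaposed, i.e. tiled, to cover the full pixel array.
BAYER_UNIT = [
    ["G", "R"],
    ["B", "G"],
]

def cfa_pattern(rows, cols, unit=BAYER_UNIT):
    """Build a rows x cols CFA pattern by juxtaposing copies of the
    minimal repeating unit; each pixel gets exactly one color label."""
    h, w = len(unit), len(unit[0])
    return [[unit[r % h][c % w] for c in range(cols)] for r in range(rows)]
```

For example, a four-by-four array built this way consists of four abutting copies of the two-by-two unit.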
An image captured using an image sensor with a Bayer CFA pattern has only one color value at each pixel. Therefore, in order to produce a full color image, the missing color values at each pixel are interpolated from the color values of nearby pixels. Numerous such interpolation techniques are known in the art. See, for example, U.S. Pat. No. 5,652,621, entitled “Adaptive Color Plane Interpolation in Single Sensor Color Electronic Camera,” which is incorporated by reference herein.
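A simple form of such interpolation can be sketched as follows. This sketch shows plain bilinear averaging of same-color neighbors, a well-known baseline technique; it is not the adaptive method of the cited patent, and the function name and array conventions are assumptions for illustration.

```python
def interpolate_green(raw, cfa, r, c):
    """Estimate the missing green value at a non-green pixel (r, c) by
    averaging the orthogonally adjacent green neighbors that lie within
    the array bounds (simple bilinear interpolation)."""
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    vals = [raw[y][x] for (y, x) in candidates
            if 0 <= y < len(raw) and 0 <= x < len(raw[0]) and cfa[y][x] == "G"]
    return sum(vals) / len(vals)
```

Analogous averaging over the nearest same-color neighbors yields the missing red and blue values, so that a full color triple is produced at every pixel.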
It is known to form a given image sensor as a so-called stacked image sensor. In a typical arrangement of this type, photodiodes or other photosensitive elements of the pixel array are formed in a first semiconductor die or layer, while associated circuitry for processing signals from the photosensitive elements is formed in a second semiconductor die or layer that underlies the first. These first and second semiconductor dies or layers are examples of what are more generally referred to herein as sensor and circuit wafers, respectively.
A problem that arises in conventional stacked image sensors relates to the manner in which the photosensitive elements in the sensor wafer are interconnected with the associated circuitry in the circuit wafer. The typical conventional approach calls for such interconnects to be formed on a per-pixel basis, that is, with a separate inter-wafer interconnect provided for each pixel. Such an approach can significantly increase the cost and complexity of the stacked image sensor, and can also have a negative impact on sensor performance.
Accordingly, a need exists for an improved stacked image sensor which overcomes the above-noted drawbacks of conventional practice.