An image sensor device is an integrated circuit (IC) having an array of pixels and circuitry for sampling the pixels and processing the pixel sample values. Pixel dimensions in image sensor devices are continually decreasing while, at the same time, efforts are continually being made to increase the photodiode area of the pixels. One way to increase photodiode area is to share transistors that perform the same function amongst multiple pixels, using multiplexing devices and techniques to select amongst them. This pixel multiplexing makes it possible to increase the full-well capacity, fill factor, and sensitivity of the pixels, and thus to beneficially increase photodiode area. Pixel multiplexing also makes it possible to reduce the number of metal interconnect routes that are needed, which allows photodiode area to be increased further.
In image sensors that use pixel multiplexing, the pixels are spatially arranged in the image sensor such that an intrinsic spatial asymmetry exists between adjacent pixels. An example of a Bayer block of a known image sensor device is shown in FIG. 1. A Bayer block is a 2-by-2 group of pixels that are covered by green, red, blue, and green color filters (not shown) and together can be used to reassemble the red, green, and blue components of the white light illuminating the image sensor device. The Bayer block includes a green pixel 2, a red pixel 3, a blue pixel 4, and a green pixel 5. The reset (RST) and source follower (SF) transistors 6 and 7 are shared amongst the pixels 2-5, as is the floating diffusion node 10. Each of the pixels 2, 3, 4, and 5 has a transfer transistor 8, 9, 11, and 12, respectively. Thus, the Bayer block shown in FIG. 1 has a total of six transistors.
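The Bayer arrangement described above can be illustrated with a short sketch. This is not part of the patent; the GRBG ordering is assumed from the green/red/blue/green layout of pixels 2-5 in FIG. 1, and the function name is hypothetical.

```python
# Illustrative sketch (not from the patent): indexing a GRBG Bayer mosaic.
# Each 2-by-2 block holds one red, one blue, and two green samples; full
# RGB output is later reassembled from these samples by demosaicing.

def bayer_color(row: int, col: int) -> str:
    """Return the color filter over pixel (row, col) for a GRBG mosaic."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    else:
        return "B" if col % 2 == 0 else "G"

# The top-left 2-by-2 block matches pixels 2 (G), 3 (R), 4 (B), 5 (G):
block = [[bayer_color(r, c) for c in range(2)] for r in range(2)]
```

The same pattern repeats across the whole imaging array, which is why the 2-by-2 block is the natural unit for sharing readout transistors.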
The horizontal routes 14 of the bayer block are formed in the lowest metal layer, the metal-1 layer. The vertical routes 15-18 are formed in the next layer above the metal-1 layer, the metal-2 layer. The vertical routes 15 and 16 are part of the network of conductors that provide power from the power supply, PVDD, to the pixels 2-5. The vertical route lines 17 and 18 are the even and odd bit columns, respectively. Multiplexing circuitry (not shown) is used to select (i.e., turn on) only one of the transfer transistors 8, 9, 11 and 12 at any given time to sample the selected pixel.
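The one-at-a-time selection performed by the multiplexing circuitry can be sketched as follows. This is a hedged behavioral model, not circuit-level detail from the patent; the function and signal names are illustrative.

```python
# Hedged sketch of the 4-to-1 pixel multiplexing described above: the four
# transfer transistors (labeled 8, 9, 11, and 12 as in FIG. 1) share one
# floating diffusion node, so the readout logic must assert exactly one
# transfer-gate drive signal at any given time.

TRANSFER_GATES = (8, 9, 11, 12)

def transfer_select(selected: int) -> dict:
    """Return the gate drive pattern that samples one pixel of the block."""
    if selected not in TRANSFER_GATES:
        raise ValueError(f"no transfer transistor {selected} in this block")
    return {gate: (gate == selected) for gate in TRANSFER_GATES}

# Selecting transistor 9 (the red pixel 3) leaves the other three gates off:
drive = transfer_select(9)
```

Cycling this selection over all four gates reads out the whole Bayer block through the single shared source follower.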
While the pixels 2-5 have very good symmetry with regard to mirroring about a horizontal or vertical axis, the overlaying color filters follow translational symmetry, which produces an asymmetrical optical angular response for the combined structure of the bayer block. This asymmetrical optical angular response often results in color cross-talk between adjacent pixels, i.e., light of one color bleeding over into a pixel intended to receive light of a different color. This color cross-talk is problematic because it can lead to artifacts in the final output image produced by the image sensor device.
The manner in which color cross-talk occurs in the image sensor device shown in FIG. 1 can be seen in FIG. 2. FIG. 2 illustrates a cross-sectional view of a portion of an image sensor device comprising two adjacent pixels 3 and 23, a color filter device 37, and a microlens structure 38. The pixel 3 on the left corresponds to the red pixel 3 shown in FIG. 1. The pixel 23 to the right of pixel 3 is an adjacent green pixel, which cannot be seen in FIG. 1. In the red pixel 3, the bottom layer 21 is the substrate, which is typically polysilicon, and the layer 22 above it is the photodiode layer that contains the photosensitive area 35 of the photodiode. The blocks 8 and 9 correspond to transfer transistors 8 and 9, respectively, shown in FIG. 1. Transfer transistor 9 is part of pixel 3, whereas transfer transistor 8 is part of the green pixel 2 shown in FIG. 1, which is not shown in FIG. 2. The blocks 16, 17, and 18 correspond to vertical routes 16, 17, and 18, respectively, shown in FIG. 1, which are formed in the metal-2 layer. In the green pixel 23, the bottom layer 31 is the polysilicon substrate, and the layer 32 above it is the photodiode layer that contains the photosensitive area 36 of the photodiode. The blocks 24 and 25 are transfer transistors. The transfer transistor 24 is part of the green pixel 23, whereas the transfer transistor 25 is part of the red pixel (not shown) to the right of green pixel 23. The blocks 26 and 27 are vertical routes formed in the metal-2 layer. Layers 21 and 31 correspond to the same layer, as do layers 22 and 32.
The color filter device 37 and the microlens structure 38 are spatially arranged such that light is received by them at angles that are non-normal with respect to the plane of the color filter device 37. The spatial arrangement is intended to match the principal ray bundle angle resulting from the off-axis locations of the pixels. The principal ray bundle is represented by arrows 41. Each ray bundle is represented by a red component 41A, a green component 41B, and a blue component 41C, which together form white light. The portion of the color filter device 37 shown in FIG. 2 includes a red color filter 43 and a green color filter 44. The red color filter 43 passes only the red component 41A and filters out the green and blue components 41B and 41C. The green color filter 44 passes only the green component 41B and filters out the red and blue components 41A and 41C.
The color filter device 37 and microlens structure 38 are spatially arranged as shown to ensure that the red component 41A is only incident on the photosensitive area 35 of the red pixel 3 and the green component 41B is only incident on the photosensitive area 36 of the green pixel 23. However, because of the spatial asymmetry of the adjacent pixels 3 and 23, and the angle of the light, some of the red components 41A may be incident on, or bleed into, the photosensitive area 36 of the green pixel 23, thereby resulting in color cross-talk. The optical asymmetry of adjacent pixels that results from the spatial asymmetry of the pixels is often described as the pixel having an asymmetrical angular response. It is also possible, but less likely because of the angle of the light, that green components 41B will be incident on the photosensitive area 35 of the red pixel 3. The situation is reversed on the opposite edge of the imaging array, where it is more likely that some green components will bleed onto the photosensitive area of the red pixel than it is that some red components will bleed onto the photosensitive area of the green pixel.
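The angle-dependent bleed described above can be modeled numerically. This is a toy model and not from the patent: the linear bleed-per-degree coefficient, the specific angle, and the function name are all assumed values chosen only to illustrate the asymmetrical angular response.

```python
# Illustrative numeric model (not from the patent) of angular cross-talk:
# at a non-normal chief-ray angle, a small fraction of the red component
# 41A lands on the adjacent green pixel's photosensitive area instead of
# its own. The bleed coefficient and angle are assumed, not measured.

def split_signal(intensity: float, angle_deg: float,
                 bleed_per_deg: float = 0.01):
    """Split one color component between the intended pixel and its neighbor.

    A positive angle tilts light toward the neighbor on the right; the
    bled fraction grows linearly with angle in this toy model, clamped
    to the range [0, 1].
    """
    bleed = max(0.0, min(1.0, bleed_per_deg * angle_deg))
    return intensity * (1.0 - bleed), intensity * bleed

# At an assumed 15-degree chief-ray angle, 15% of the red signal bleeds
# into the adjacent green pixel; at a negative angle (the opposite edge
# of the array), no red bleeds in this direction at all:
red_on_red, red_on_green = split_signal(1.0, 15.0)
```

Because the sign of the angle flips across the optical center, the same model reproduces the reversed bleed direction described for the opposite edge of the imaging array.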
Color cross-talk between adjacent pixels can produce artifacts in the output image of the imaging device in the form of color variations across the imaging array of pixels where there should be color uniformity. For example, when imaging a target of uniform color (in particular, of uniform hue), the asymmetrical angular responses of the pixels may result in the output image having a displeasing greenish hue on one edge of the image and a purplish hue on the other edge of the image. Furthermore, the asymmetrical angular response of pixels is even more pronounced in pixels located farther away from the optical center of the imaging array, which can result in cross-talk amongst pixels closer to the optical center being unequal to cross-talk amongst pixels farther from the optical center. This unequal cross-talk typically results in more pronounced hue artifacts in the image.
Accordingly, a need exists for a way to eliminate or reduce color cross-talk between adjacent pixels in image sensor devices.