Market growth for consumer electronic imaging products, such as digital cameras, is increasingly driving the development of low-cost color video cameras. A major cost of these cameras is the image sensor chip, whose cost is strongly dependent on the sensor area. Two major goals of sensor designers are to improve sensitivity by increasing pixel area, and to improve resolution by increasing pixel count. However, both of these goals increase sensor area and, hence, sensor cost. If sensor area is to remain constant, then conventional approaches to these goals are mutually exclusive: image resolution can only be increased at the cost of lowered sensitivity, and sensitivity can only be increased at the cost of lowered resolution.
All color cameras spectrally filter the received image, typically into three bands of the visible spectrum. These bands usually correspond to the primary colors red, green, and blue, or sometimes to the complementary colors cyan, magenta, and yellow. Other color sets are also possible.
There are two principal forms of color camera construction. In the first form, the image is split or replicated to form multiple images, either concurrently on multiple sensors or sequentially on a single sensor. Each of these multiple images, or color channels, is separately filtered to produce one spectral component of the defined composite color image. Each of the multiple images therefore represents a fully spatially-sampled version of the original scene.
In the second form, a single image is formed on a single sensor device. This device is spatially divided into many (typically 100,000 or more) pixels and covered with a color filter array (CFA) comprising a mixture of individual color filters. Each color filter corresponds to one of the desired spectral components, and is arranged in such a manner as to provide concurrent but sub-sampled color channels. Subsequently, interpolation techniques must be employed to restore the missing data in each color channel.
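The sub-sampling imposed by a CFA can be sketched in code. The following minimal example (the 2x2 filter layout and all function names are illustrative assumptions, not taken from the source) shows how a full RGB image is reduced to a mosaic in which each pixel retains only the one channel its filter passes, leaving the other two channels missing:

```python
# Sketch: sub-sampling a full RGB image through a hypothetical 2x2
# CFA tile (G R / B G). Layout and names are illustrative only.

def cfa_color(row, col):
    """Return which channel ('R', 'G', or 'B') the filter passes at (row, col)."""
    if row % 2 == 0:
        return 'G' if col % 2 == 0 else 'R'
    return 'B' if col % 2 == 0 else 'G'

def mosaic(rgb_image):
    """Keep only the filtered channel value at each pixel site.

    rgb_image is a list of rows; each pixel is an (R, G, B) tuple.
    The result holds one scalar per pixel; the other two channels
    are simply absent and must later be restored by interpolation.
    """
    channel_index = {'R': 0, 'G': 1, 'B': 2}
    return [
        [pixel[channel_index[cfa_color(r, c)]] for c, pixel in enumerate(row)]
        for r, row in enumerate(rgb_image)
    ]
```

For example, a 2x2 input yields one green, one red, one blue, and one green sample, so each color channel is concurrent with the others but spatially sub-sampled.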
While the split-image form gives superior results, it does so at the expense of increased camera cost and size, so most consumer cameras use the single-sensor CFA form. The sensor in these cases is typically a charge coupled device (CCD) or a CMOS array sensor. The production process for both types of sensors is well suited to deposition of the color filters while the devices are still in silicon-wafer form, which enhances the cost benefit.
Many different CFA patterns are in use, including vertical stripes of single colors as well as fully 2-dimensional CFA patterns. FIG. 1 illustrates three example patterns. FIG. 1(a) is a vertical stripe (RGB) pattern. FIG. 1(b) is a matrix of complementary colors which is in common use. FIG. 1(c) is an RGB CFA pattern disclosed in U.S. Pat. No. 3,971,065. The pattern of FIG. 1(c) contains more green pixels than red or blue because green is the principal primary color component of luminance. As disclosed in U.S. Pat. No. 4,176,373, this pattern may be used with restoration techniques to produce an RGB triplet on every pixel site. This increases effective pixel size and hence sensitivity by a factor of three at no cost in sensor area.
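One simple restoration technique is bilinear interpolation, in which each missing channel value is estimated by averaging the nearest pixels that do carry that channel. The sketch below (an illustrative assumption; not necessarily the restoration method of the cited patents) fills in a complete RGB triplet at every pixel site of a mosaic sampled through a hypothetical 2x2 G R / B G tile:

```python
# Sketch: restoring an RGB triplet at every pixel site of a CFA mosaic
# by averaging each channel over the 3x3 neighbourhood (a simple
# bilinear-style interpolation; pattern and names are illustrative).

def cfa_color(row, col):
    """Which channel the hypothetical 2x2 CFA tile (G R / B G) passes."""
    if row % 2 == 0:
        return 'G' if col % 2 == 0 else 'R'
    return 'B' if col % 2 == 0 else 'G'

def interpolate(mosaic):
    """Return an image with an [R, G, B] triplet at every pixel site."""
    h, w = len(mosaic), len(mosaic[0])
    idx = {'R': 0, 'G': 1, 'B': 2}
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            sums, counts = [0, 0, 0], [0, 0, 0]
            # gather every in-bounds sample in the 3x3 neighbourhood,
            # including the pixel's own filtered sample
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        k = idx[cfa_color(rr, cc)]
                        sums[k] += mosaic[rr][cc]
                        counts[k] += 1
            row.append([sums[k] // counts[k] if counts[k] else 0
                        for k in range(3)])
        out.append(row)
    return out
```

Because the neighbourhood of every pixel contains samples of all three channels, the restored image carries a full color triplet on every pixel site, which is the sense in which restoration triples the effective pixel data without enlarging the sensor.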
Once the pattern is decided, it is necessary to develop techniques and circuits for combining the outputs of the individual pixels into a composite color image representation. The choice of a color filter pattern and of the subsequent restoration process together determine the overall performance of the color camera. In this context, performance refers to both perceived resolution and color quality.