1. Field of the Invention
The present invention relates generally to electronic imaging sensors, and more particularly, to color filter arrays (CFAs) formed on the sensor to generate color images.
2. Description of the Related Art
Color cameras typically use charge-coupled devices (CCDs) or CMOS image sensors (CIS) to capture still images and generate live or recorded video. Although it is highly desirable to provide full color depth at each picture element (i.e., pixel) to minimize optical complexity and avoid problems with image registration in the spatial or temporal domains, cost and performance considerations often force each pixel to instead process a specific principal color. Full color images are subsequently formed by appropriately processing the color information from the entire color matrix. Various performance tradeoffs are encountered with such color matrices because at least three distinct types of color information must be efficiently extracted in order to accurately represent a wide color gamut in video signal form. Camera and sensor designers hence constantly trade signal-to-noise ratio for spatial or color resolution and vice versa.
An early approach to produce color from a single sensing device used a single image sensor of broad wavelength sensitivity in conjunction with a spinning filter disc. The disc successively passed a series of color filters through the image beam in a repeating sequence to produce a color image by composing several fields. Devices operating in this manner produce a “field sequential” color signal. A major problem with this approach is that the resulting signal presents the extracted color image information in a time order that is radically different from, for instance, standard NTSC or high definition ATSC video signals. Further, color information is temporally displaced for each field and, especially, for each composite color frame. Finally, some of the color information (e.g., the blue image frame when a basic blue color filter is used) tends to be disproportionately detailed and hence wastes available bandwidth considering the response characteristics of the human eye.
Other schemes produce color images using striped color filters superimposed on a single image sensor. One such image sensor uses filter grids that are angularly superimposed on one another (see U.S. Pat. No. 3,378,633). Such image sensors produce a composite signal wherein chrominance information is represented in the form of modulated carrier signals as a result of image scanning. Such apparatus may be adapted to produce signals in the NTSC format or, if desired, the color image information can be separated by frequency domain techniques.
Striped filters which transmit a repeating sequence of three or more spectral bands have also been used in color imaging. The filters are typically aligned in one direction and the image is then scanned orthogonally to that direction. In effect, elemental sample areas are defined along the filter stripes so that sampling for a given color is not uniform in both directions. Additionally, the resulting sampling patterns once again tend to provide a disproportionate quantity of information regarding basic color vectors to which the eye has less resolving power, e.g., less important "blue" or "red" information relative to more important "green" information.
Another approach to color imaging is the "dot" scanning system, as taught by Banning in U.S. Pat. No. 2,683,769. This approach generally uses spectrally selective sensor elements arranged in triads of red, green and blue elements. In U.S. Pat. No. 2,755,334, also to Banning, a repeated arrangement of four-element groupings (red-, green-, blue-, and white-sensitive elements, respectively) is described as an alternative. Such approaches to color imaging have not been of practical significance until now, however, in part because of the higher cost of fabricating the large number of individual elements required to produce adequate image detail. Nevertheless, this approach is now being used to produce ultra-large displays via triads of light emitting diodes. A key advantage is that the color space is equally sampled. The resulting 4:4:4 color space precludes generation of objectionable image artifacts.
Another approach is disclosed in U.S. Pat. No. 3,971,065, invented by B. E. Bayer. In the Bayer CFA, color images are produced by a single imaging array composed of individual luminance and chrominance sensing elements that are distributed in repeating interlaid patterns wherein the luminance pattern occurs at the highest frequency of occurrence—and therefore the highest frequency of image sampling—irrespective of direction across the array.
Preferably, to produce an element array according to the Bayer approach, a solid state sensor array of broad wavelength sensitivity is provided with a superposed filter array. Filters which are selectively transparent in the green region of the visible spectrum constitute luminance-type elements, and filters selectively transparent in the red and blue spectral regions, respectively, constitute chrominance-type elements. (The term “luminance” is herein used in a broad sense to refer to the color vector which is the major contributor of luminance information. The term “chrominance” refers to those color vectors other than the luminance color vectors which provide a basis for defining an image.)
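By way of illustration, the Bayer sampling geometry described above can be sketched in a few lines of code. The following is an illustrative sketch only (the GRBG tile phase and the `bayer_pattern` helper name are assumptions for illustration, not taken from the Bayer patent); it shows how green luminance-type elements occupy half the sites and recur at the highest sampling frequency along every row and column, while red and blue chrominance-type elements each occupy one quarter of the sites.

```python
# Illustrative sketch of the repeating 2x2 Bayer tile (GRBG phase assumed).
# Green (luminance-type) filters fill half the sites; red and blue
# (chrominance-type) filters fill a quarter each.

def bayer_pattern(rows, cols):
    """Return a rows x cols grid of 'R'/'G'/'B' labels for a GRBG mosaic."""
    tile = [['G', 'R'],   # even rows: green, red, green, red, ...
            ['B', 'G']]   # odd rows:  blue, green, blue, green, ...
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

pattern = bayer_pattern(4, 4)
for row in pattern:
    print(' '.join(row))
```

Note that along any single row or column, green samples appear at every other site, so the luminance pattern is sampled at twice the rate of either chrominance pattern irrespective of direction across the array.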
Various types of Bayer CFAs have since been developed. An effective method for re-optimizing spatial resolution involves migrating from standard rectangular patterns on an orthogonal grid to alternative configurations wherein groups of pixels are shifted horizontally and/or vertically within the overall matrix. For example, Ochi in U.S. Pat. No. 4,558,365 improves resolution and suppresses the generation of moire interference patterns by fitting octagonal photosensitive elements into a closely-packed array that also increases the photo-sensing area disposed at each pixel. This, however, reduces the maximum number of pixels for the available array area, and thus incurs the additional cost of fabricating a specific number of individual elements to provide sufficient image information and produce adequate detail.
Sekine in U.S. Pat. No. 4,602,289 likewise revises the spatial sampling by implementing a half-pixel shift in both the horizontal and vertical directions. While such a sensor would otherwise be disposed on a standard rectangular grid, Sekine shifts a second group of pixels by 45° relative to a first group comprising the odd-numbered columns of the matrix sensor. The gaps between obliquely adjacent photodetector sites are filled with vertical registers for signal readout.
Yamada in U.S. Pat. No. 6,236,434 similarly shifts photosensor rows and columns relative to each other to increase overall spatial sampling. Here, serpentine charge transfer devices are squeezed between the obliquely adjacent photosensors to achieve signal readout. Since each photosensor is physically reduced in size, however, photosensor "fill factor" and dynamic range are once again smaller than could otherwise be achieved.
The spatial sampling enabled by U.S. Pat. Nos. 4,558,365, 4,602,289 and 6,236,434 can be effective for objects predominantly disposed in a diagonal orientation, such as tree branches, mountain ridges, pyramids, etc. By contrast, buildings, telephone poles and other man-made or natural objects are alternately singly and multiply sampled along a column or a row of the sensor. Takemura in U.S. Pat. No. 5,099,317 thus generally improves the efficacy of diagonalized spatial sampling by using multiple sensors in conjunction with a beam-splitting prism. Rather than performing the diagonal shift within a sensor, Takemura diagonally shifts two (or more) sensors relative to each other to produce multiple samples at each position in the spatial domain. Effective resolution is thus enhanced since the multiple sensors interject additional spatial samples at the corners of each normally-located pixel. This construction also increases sensitivity by using multiple sensors to produce each full color sample. Locating the second array of sensors at 45 degrees with respect to the horizontal and vertical also increases vertical and horizontal resolution in addition to the diagonal resolution, at the cost of reducing total pixel count.
Most recently, Sony Corp. has developed the ClearVid™ CMOS sensor, which applies the previously described diagonalization in video cameras using one or more sensors. The modification with respect to the prior art is a reduction of the density of red and blue pixels to about ⅙th that of the green pixels to further boost the luminance S/N ratio. However, the non-standard CFA configuration requires custom signal processing electronics to recompose the raw color information into a suitable format, and the coarse blue and red pixel distribution supplied by the so-called "de-mosaicing" electronics degrades chrominance resolution to a level below generally accepted standards.
Since the Bayer CFA is now so ubiquitous that de-mosaicing of various pixel shapes, from rectangular through hexagonal and diamond (U.S. Pat. No. 6,522,356), is supported by standard signal processing electronics, it is greatly advantageous to revise a Bayer-like CFA to address specific shortcomings rather than to pursue custom alternatives. The CFA pattern should also support extant standards for luminance and chrominance resolution.
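The de-mosaicing step referred to above can be sketched in simplified form. The following is an illustrative sketch only (the `demosaic_green` helper and the GRBG phase are assumptions for illustration, not the method of any cited patent): it reconstructs a full-resolution green plane from a Bayer mosaic by bilinear averaging, which is the basic operation standard signal processing electronics perform for each color plane.

```python
# Illustrative sketch: bilinear "de-mosaicing" of the green channel
# from a GRBG Bayer mosaic (phase assumed for illustration). At sites
# carrying a red or blue filter, the missing green value is estimated
# by averaging the in-bounds green neighbors above, below, left, right.

def green_site(r, c):
    """True where the assumed GRBG tile places a green filter."""
    return (r + c) % 2 == 0

def demosaic_green(raw):
    rows, cols = len(raw), len(raw[0])
    green = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if green_site(r, c):
                green[r][c] = float(raw[r][c])   # measured directly
            else:
                # All four orthogonal neighbors of a non-green site are green.
                neighbors = [raw[rr][cc]
                             for rr, cc in ((r - 1, c), (r + 1, c),
                                            (r, c - 1), (r, c + 1))
                             if 0 <= rr < rows and 0 <= cc < cols]
                green[r][c] = sum(neighbors) / len(neighbors)
    return green

# Mosaic with constant green (10) and distinct red/blue raw values.
raw = [[10, 20, 10, 20],
       [30, 10, 30, 10],
       [10, 20, 10, 20],
       [30, 10, 30, 10]]
out = demosaic_green(raw)
```

Because each red or blue site borrows its green estimate from neighboring sites, chrominance planes sampled as coarsely as in the ClearVid™ arrangement leave the interpolator with fewer true samples to average, which is the mechanism behind the chrominance resolution loss noted above.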