The present invention relates to digital image processing, and more particularly to color reconstruction. Some embodiments can be used to process images obtained by digital photographic or video cameras.
FIG. 1 illustrates a digital camera. (FIG. 1 is not labeled “prior art” because it will also be used to illustrate some features of the present invention.) Optical system 110 causes an image of a scene 112 to appear on an electronic image-capturing device 114. Device 114 may be a charge coupled device (CCD), a CMOS (Complementary Metal Oxide Semiconductor) based device, or some other type. Device 114 contains an array of light sensors 120 (e.g. photodiodes). Each sensor generates an analog electrical signal indicative of the number of photons registered by the sensor.
The analog signals are converted to digital signals by an A/D converter (not shown). The digital signals are processed by a circuit 118 as desired. Circuit 118 may also store the digital information carried by, or derived from, these signals.
An image can be described as a collection of picture elements (pixels). Each pixel corresponds to a sensor 120. Therefore, the word “pixel” will be used herein to denote both an image element and a sensor.
Some digital image processing devices are designed to work with colors represented as a combination of individual color components, e.g. red, green and blue. In order to obtain individual color components, each sensor 120 is covered with a color filter to detect a single component, as illustrated in FIG. 2. The sensors are arranged in a “color filter array”, or CFA. Each sensor detects only the red (R), blue (B) or green (G) component. The particular pattern of the red, blue and green filters in FIG. 2 is known as the Bayer or GRGB pattern and described in U.S. Pat. No. 3,971,065 issued Jul. 20, 1976 to Bayer. The Bayer pattern is obtained by repeating the block
G R
B G

The green filters are positioned along every other diagonal. The remaining diagonals contain alternating red and blue filters. The human eye is more sensitive to the green component than to the red and blue components, which is one reason why the green filters occur twice as often as the red and blue filters. Other filter arrays are also known. For example, the CYGM (Cyan, Yellow, Green, Magenta) pattern has been used in some photographic cameras.
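The tiling of the 2x2 block above can be sketched in a few lines of Python. This is purely illustrative and not part of the described subject matter; the function name `bayer_filter` is hypothetical:

```python
def bayer_filter(row, col):
    """Return the color ('G', 'R', or 'B') of the Bayer filter at a
    given sensor position, obtained by tiling the 2x2 block G R / B G."""
    block = (("G", "R"),
             ("B", "G"))
    return block[row % 2][col % 2]
```

Note that positions where `row` and `col` have the same parity are green, which places the green filters along every other diagonal, as described above.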
At each pixel, the color components not measured by the sensor are reconstructed by circuit 118. A component can be reconstructed linearly by taking the component's average at adjacent pixels. For example, the green value at pixel 120.1 can be obtained as the average of the green values at pixels 120.2, 120.3, 120.4, 120.5. Alternatively, the reconstruction can be performed using higher order polynomials (e.g. cubic polynomials). See U.S. Pat. No. 6,320,593 issued Nov. 20, 2001 to Sobel et al.
If a component changes smoothly from pixel to pixel, the linear and higher order polynomial reconstruction techniques yield good results. However, in regions of abrupt color transition the polynomial reconstruction is not adequate. This is illustrated in FIG. 3, showing an image projected on a Bayer color filter array. For each pixel, the corresponding intensity is shown in a subscript. For example, R100 indicates a Red pixel (a pixel sensing the red component); the red light intensity sensed at this pixel is 100. Higher numbers indicate higher intensities. The image has yellow areas 180, 190. Yellow is obtained as a combination of red and green. The blue intensity B=0 throughout. In area 180, R=G=100. In area 190, R=G=10, so area 180 is brighter. If circuit 118 uses bilinear reconstruction, the green intensity at pixel 120.1 will be constructed as the average of the green intensities at pixels 120.2, 120.3, 120.4, 120.5, so the green intensity at pixel 120.1 will be:

(100+100+10+10)/4=55  (1)
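The computation of equation (1) can be stated as a short sketch, using the green intensities of FIG. 3 at the four neighbors of pixel 120.1 (the variable names are hypothetical):

```python
# Bilinear green reconstruction at a red pixel: the green value is the
# average of the four adjacent green neighbors. The intensities below
# are those of FIG. 3 at pixels 120.2, 120.3, 120.4, 120.5.
g_neighbors = [100, 100, 10, 10]
g_reconstructed = sum(g_neighbors) / len(g_neighbors)
# equation (1): (100 + 100 + 10 + 10) / 4 = 55
```

As the text explains, this reconstructed value of 55 exceeds the red intensity at pixel 120.1, producing a spurious green tint at the boundary between the two yellow areas.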
This intensity value is higher than the red intensity, so the reconstructed image will be green at pixel 120.1.
U.S. Pat. No. 5,373,322 issued Dec. 13, 1994 to Laroche et al. describes the following technique. The horizontal and vertical gradients HDiff and VDiff are computed near the pixel 120.1, where:

HDiff = |(R120.7 + R120.6)/2 − R120.1|
VDiff = |(R120.8 + R120.9)/2 − R120.1|
If HDiff &lt; VDiff, the image area near pixel 120.1 is presumed to represent predominately horizontal scene structure, so the green intensity at pixel 120.1 is interpolated using the green intensities in the same row:

G = (G120.5 + G120.3)/2
If VDiff &lt; HDiff, the image area is presumed to represent predominately vertical scene structure, so the green intensity at pixel 120.1 is interpolated using the green intensities in the same column:

G = (G120.2 + G120.4)/2
If, however, HDiff = VDiff (as in FIG. 3), the green value at pixel 120.1 is interpolated as the average of all the adjacent green intensities, as in equation (1).
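The three cases above can be sketched as a single function. This is an illustrative reading of the Laroche et al. technique, not the patented implementation itself; the function and parameter names are hypothetical (`r_left2` denotes the red pixel two columns to the left, corresponding to R120.6 or R120.7, and so on):

```python
def interpolate_green(r_center, r_left2, r_right2, r_up2, r_down2,
                      g_up, g_down, g_left, g_right):
    """Direction-adaptive green interpolation at a red pixel, in the
    manner of U.S. Pat. No. 5,373,322 (Laroche et al.)."""
    hdiff = abs((r_left2 + r_right2) / 2 - r_center)
    vdiff = abs((r_up2 + r_down2) / 2 - r_center)
    if hdiff < vdiff:
        # Predominately horizontal structure: interpolate along the row.
        return (g_left + g_right) / 2
    if vdiff < hdiff:
        # Predominately vertical structure: interpolate along the column.
        return (g_up + g_down) / 2
    # HDiff = VDiff: average all four adjacent greens, as in equation (1).
    return (g_up + g_down + g_left + g_right) / 4
```

In the situation of FIG. 3 the two gradients are equal, so the function falls through to the four-neighbor average and reproduces the value of 55 from equation (1), with the same green-tint artifact.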