FIG. 1 illustrates the geometry of a Bayer filter. Each filter element is referred to by a letter representing the color of the filter element, where G indicates green, R indicates red and B indicates blue. These filter elements are disposed over the individual photo sensors in a sensor array and are utilized to create color images. The sensor elements in the array correspond to pixel elements in an image. In the Bayer filter, each pixel element outputs a sensor value indicating the amount of light that passes through the filter element disposed above the sensor element.
The frequencies of the filter colors in the Bayer filter array are 50% G, 25% R and 25% B. These frequencies are selected to mimic the greater sensitivity of the human eye to green light.
In the following, a particular pixel element, or the sensor value output by the pixel element, in the array of FIG. 1 is referred to by the color of the filter element and the row and column of its location. For example, the filter element in the upper left-hand corner is referred to as G(1,1). In order to create a color image, the actual RGB values of the scene being imaged are required for each pixel element. To this end, for a given pixel element such as G(3,3), the R and B values must be found by interpolation of the R and B sensor values from neighboring pixel elements.
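The naming convention and the neighbor interpolation at a green pixel can be sketched in code as follows. This is a minimal illustrative sketch, not a method from the patent: it assumes a hypothetical layout with G at (1,1), odd rows running G R G R ... and even rows running B G B G ..., which may differ from the exact arrangement of FIG. 1.

```python
# Minimal sketch of neighbor averaging on a Bayer mosaic.
# Assumed (hypothetical) layout with G at (1,1), 1-indexed as in FIG. 1:
#   odd rows:  G R G R ...
#   even rows: B G B G ...

def bayer_color(row, col):
    """Return the filter color at a 1-indexed (row, col) position."""
    if (row + col) % 2 == 0:
        return "G"
    return "R" if row % 2 == 1 else "B"

def interpolate_at_green(mosaic, row, col):
    """Estimate the missing R and B values at a green pixel by averaging
    the nearest same-color neighbors.  `mosaic` is a 2-D list accessed
    0-indexed; (row, col) are 1-indexed as in the text."""
    assert bayer_color(row, col) == "G"
    r, c = row - 1, col - 1  # convert to 0-indexed
    horiz = (mosaic[r][c - 1] + mosaic[r][c + 1]) / 2.0
    vert = (mosaic[r - 1][c] + mosaic[r + 1][c]) / 2.0
    if row % 2 == 1:  # odd row: the horizontal neighbors are red
        return {"R": horiz, "B": vert}
    return {"R": vert, "B": horiz}
```

For G(3,3), for example, the horizontal neighbors at (3,2) and (3,4) supply the R estimate and the vertical neighbors at (2,3) and (4,3) supply the B estimate.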
There are many ways of converting an RGB Bayer filter (as shown in FIG. 1) image to a full resolution RGB image (see for example U.S. Pat. Nos. 3,971,065, 4,642,678, 6,181,376, 7,053,908, 7,079,705).
Most of these existing methods interpolate the missing sensor values from neighboring sensor values of the same color plane, under the assumption that the sensor values of neighboring pixel elements are highly correlated in an image. However, for image regions with high spatial frequency components such as sharp lines and edges, the correlation among neighboring pixel elements may be poor. Therefore, interpolation that depends solely on the correlation of neighboring pixel elements may generate color aliasing artifacts in regions containing fine details. In addition, neighbor-correlation interpolation methods may generate images with independent noise levels among the color planes, resulting in higher noise amplification during color correction processing.
Other methods aim at preserving high spatial frequency components (sharp edges) at the expense of smooth transitions and/or noise suppression. For example, there are interpolation algorithms incorporating both the neighboring sensor values and the raw sensor value of the current pixel element when calculating the missing color values. Such algorithms operate under the assumption that the different color sensor values of the same pixel element are usually highly correlated. The correlation among the different colors is assumed either to be fixed for all images or to be constant across a single image. The color-correlation assumption and the associated interpolation methods can offer improved edge and line reconstruction with less chromatic aliasing.
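One common form of this idea is a gradient-corrected interpolation, sketched below under stated assumptions. The sketch is illustrative only and is not taken from any of the cited patents: the missing green value at a red pixel is estimated from the green neighbors plus a correction derived from how much the raw red value at the current pixel deviates from its same-color neighbors, and the gain `alpha` is a hypothetical parameter.

```python
# Hedged sketch of correlation-aware interpolation: the raw sensor value
# of the current (red) pixel contributes to the estimate of its missing
# green value, exploiting the assumed correlation between color planes.

def green_at_red(mosaic, r, c, alpha=0.5):
    """Gradient-corrected estimate of G at a red pixel (0-indexed).

    The average of the four adjacent green neighbors is adjusted by the
    deviation of the center red value from the average of the four
    nearest red samples (two pixels away in each direction).  `alpha` is
    a hypothetical gain controlling the strength of the correction."""
    g_avg = (mosaic[r - 1][c] + mosaic[r + 1][c]
             + mosaic[r][c - 1] + mosaic[r][c + 1]) / 4.0
    r_avg = (mosaic[r - 2][c] + mosaic[r + 2][c]
             + mosaic[r][c - 2] + mosaic[r][c + 2]) / 4.0
    return g_avg + alpha * (mosaic[r][c] - r_avg)
```

With `alpha=0` this reduces to plain neighbor averaging; a positive `alpha` sharpens edges where the red plane varies, which is precisely where the fixed-correlation assumption helps or hurts.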
However, in some images, the improved edge and line reconstruction comes at the cost of reduced color saturation, due to the assumption of a fixed positive correlation among the different color planes. The method taught in U.S. Pat. No. 4,642,678 is different in that it interpolates the deltas of red/green and blue/green (referred to as chrominance in the patent). All of the methods that use single pixel elements to determine color or intensity are prone to noise artifacts.
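The delta-based idea can be sketched as follows. This is a simplified illustration in the spirit of difference-plane interpolation, not the procedure of U.S. Pat. No. 4,642,678: assuming the green plane has already been fully interpolated, the missing red value is obtained by averaging the differences R − G at nearby red samples and adding back the local green value, so that chrominance rather than raw intensity is interpolated.

```python
# Simplified sketch of chrominance (difference) interpolation.  Instead of
# averaging raw R values, average the differences R - G at neighboring red
# pixels and add back the local green value.  The function name, argument
# layout, and neighbor list are illustrative assumptions.

def red_via_chrominance(green_plane, mosaic, r, c, red_neighbors):
    """Estimate R at 0-indexed (r, c) given a fully interpolated green
    plane.  `red_neighbors` lists (row, col) positions of nearby pixels
    where the mosaic holds raw red samples."""
    deltas = [mosaic[i][j] - green_plane[i][j] for i, j in red_neighbors]
    avg_delta = sum(deltas) / len(deltas)
    return green_plane[r][c] + avg_delta
```

Because the R − G difference tends to vary more slowly than R itself across edges shared by both planes, interpolating the difference can preserve luminance detail while limiting chromatic aliasing.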