(1) Field of the Invention
This invention relates generally to color image processing, and relates more particularly to a method to automatically identify white balance parameters with good clustering for different illuminants.
(2) Description of the Prior Art
The human eye adapts to the color temperature of the illumination to some extent and thus gives us similar-looking images in different lights. This is not the case with the image sensors used in digital cameras; instead of adapting, they require a white balance operation. The white balance operation is simply a scaling of the color channels to assure that a grey object remains grey even if the color of the illumination changes. To do this properly, a grey object is needed in the scene for calibration, but one is usually not available, and there is therefore a need for automatic routines that can perform this scaling based on the content of any image.
The main difficulty is to extract the relevant information from the current image without confusing object color with illumination color. Especially difficult are situations where a colored object, for example a blue sky, covers the whole picture.
The color-filter-array (CFA) of a color imager often follows the commonly used “Bayer” pattern. The pattern of said Bayer CFA is shown in FIG. 1 prior art. Half of the total number of pixels is green (G), while a quarter of the total number is assigned to each of red (R) and blue (B).
In order to obtain this color information, the color image pixels are covered with a red, a green, or a blue filter, in a repeating pattern. This pattern, or sequence, of filters can vary, but the widely adopted “Bayer” pattern, which was invented at Kodak, is a repeated 2×2 arrangement as shown in FIG. 1 prior art.
This pattern, shown in FIG. 1 prior art, is based on the premise that the human eye derives most of its luminance data from the green content of a scene, and it is the resolution of this luminance data that is perceived as the “resolution” of an image. Therefore, by ensuring that more of the pixels are “green”, a higher-resolution image can be created, compared with an alternating R-G-B color filter array with equal numbers of Red, Green and Blue pixels.
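As an illustration, the repeating 2×2 arrangement can be sketched as a simple lookup. This is a hypothetical helper, not code from the patent, and it assumes a G-R / B-G phasing; the exact phasing shown in FIG. 1 may differ.

```python
def bayer_color(row, col):
    """Return the filter color ('R', 'G' or 'B') at a position in a
    repeating 2x2 Bayer CFA, assuming a G-R / B-G phasing."""
    if row % 2 == 0:                        # even rows alternate G, R
        return 'G' if col % 2 == 0 else 'R'
    return 'B' if col % 2 == 0 else 'G'     # odd rows alternate B, G

# Half of all positions are green, a quarter red, a quarter blue:
counts = {'R': 0, 'G': 0, 'B': 0}
for r in range(4):
    for c in range(4):
        counts[bayer_color(r, c)] += 1
# counts == {'R': 4, 'G': 8, 'B': 4}
```

Whatever the phasing, the 2×2 tile always contains two green filters on one diagonal, which yields the 2:1:1 G:R:B ratio described above.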
Depending on the “color temperature” of a light source, a white object may generate different values for its R, G and B pixel values. For example, when the camera is pointed at a uniformly diffused white object that fills the entire field of view, the resulting R, G and B values may form the matrix shown in FIG. 2A prior art. Using a fluorescent light source, the values for Red 1, Green 2 and Blue 3 are R=110, G=300 and B=200.
FIG. 2B prior art shows how the values of R and B change when incandescent light is used. The new values for Red 1, Green 2 and Blue 3 are R=200, G=300 and B=110.
Both cases require correction because a white object should have equal R, G and B data values. The simplest correction would involve “equalizing” the data: the Green pixel values are kept unchanged, and the Red and Blue pixel values are multiplied by appropriate “gain” coefficients.
In the case of the “fluorescent light” example, said gain coefficient Rg (or Red gain) should be 300/110=2.7 and Bg (or Blue gain) should be 300/200=1.5. In the case of the “incandescent light” example, Rg should be 300/200=1.5 and Bg should be 300/110=2.7.
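The gain arithmetic above can be expressed as a short sketch (hypothetical function name; not code from the patent):

```python
def white_balance_gains(r, g, b):
    """Compute the Red and Blue gains that equalize a white object's
    channel values to its Green value (Green itself is left unchanged)."""
    return g / r, g / b

# Fluorescent example from the text: R=110, G=300, B=200
rg, bg = white_balance_gains(110, 300, 200)    # rg ~ 2.7, bg = 1.5

# Incandescent example from the text: R=200, G=300, B=110
rg2, bg2 = white_balance_gains(200, 300, 110)  # rg2 = 1.5, bg2 ~ 2.7
```

Note how the two illuminants simply swap the roles of the Red and Blue gains, since the R and B readings of the white object are swapped.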
As shown in the above examples, the Rg and Bg coefficients depend on the type or the color temperature of the illumination that is used. Therefore a “white balance” operation is required each time the illumination changes.
A common prior-art method is outlined in FIG. 3 prior art. In step 31, the software of the camera instructs the user to point the camera at a uniform white object, e.g. a sheet of white paper. In step 32, the camera's “white balance now” button is pressed. In the next step 33, the software examines the ratios G/R and G/B and determines the average values of Rg and Bg. In the last step 34, the software stores the computed average Rg and Bg values and uses these coefficients to generate color-corrected Red and Blue pixel values.
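Steps 33 and 34 can be sketched as follows. This is a minimal illustration with hypothetical helper names, assuming the white-object pixels are available as (R, G, B) triples:

```python
def calibrate(white_pixels):
    """Step 33: average the G/R and G/B ratios over pixels of the
    uniform white target to obtain the gains Rg and Bg."""
    n = len(white_pixels)
    rg = sum(g / r for r, g, b in white_pixels) / n
    bg = sum(g / b for r, g, b in white_pixels) / n
    return rg, bg

def correct(pixel, rg, bg):
    """Step 34: apply the stored gains; Green is passed through."""
    r, g, b = pixel
    return (r * rg, g, b * bg)

# Fluorescent-light white object from the earlier example:
rg, bg = calibrate([(110, 300, 200)])
white = correct((110, 300, 200), rg, bg)
# white's three channels are now all (approximately) 300
```

After calibration, the stored Rg and Bg are applied to every subsequent frame until the illumination, and hence the white balance, changes again.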
U.S. Pat. No. 5,917,556 (to Katayama) describes a method for correcting a color video signal for white balance comprising the steps of: providing a digital color image signal having a sequence of mono-color pixels, wherein each of the pixels represents one color of first, second, and third different colors, which are repeated in a pre-set pattern; coarse white balance processing each mono-color pixel for a coarse white balance; spatially processing the sequence of mono-color pixels to produce a sequence of tricolor pixels, wherein each of the tricolor pixels represents the first, second, and third different colors; and color correction and fine white balance processing each of the tricolor pixels.
U.S. Pat. No. 5,995,142 (to Matsufune) discloses an imaging apparatus including an imaging device for converting an image into a plurality of color signals each having a signal level; a white balance amplifier for adjusting the signal level of at least one color signal to produce a plurality of amplified color signals; a calibration device for calibrating the white balance amplifier and for producing at least one calibration parameter; a detecting device for detecting the amplified color signals; a calculation device for calculating at least one white balance amplification adjustment; a comparing device for comparing at least one white balance calibration parameter; and an automatic adjustment device for automatically adjusting the white balance amplifier.
U.S. Pat. No. 6,201,530 (to Thadani et al.) shows a method for displaying an image on a display device. This method sequentially includes the steps of receiving the image in a first data format, performing a white balance correction in combination with a gamma correction on the image in the first data format, performing a color correction in combination with a color space conversion to a second data format for the image, and displaying the image in the second data format on the display device.