Adaptation can be considered a dynamic mechanism of the human visual system that optimizes the visual response to a particular viewing condition. Dark and light adaptation are the changes in visual sensitivity that occur when the level of illumination is decreased or increased, respectively. Human chromatic adaptation is the ability of the human visual system to compensate for the color of the illumination and thereby approximately preserve the appearance of an object. For example, chromatic adaptation can be observed by examining a white object under different types of illumination, such as daylight and incandescent light. Daylight is “bluer”: it contains far more short-wavelength energy than incandescent light. Nevertheless, the white object retains its white appearance under both light sources, as long as the viewer is adapted to the light source.
Image capturing systems such as scanners, digital cameras, digital camcorders or other devices, unlike human beings, do not have the ability to adapt to an illumination source. Scanners usually have fluorescent light sources. Illumination sources captured by digital cameras or camcorders, for example, typically vary according to the scene, and often within the scene. Additionally, images captured with these devices are viewed using a wide variety of light sources. To faithfully reproduce the appearance of image colors, it would be beneficial to compensate for the “cast” of the illuminant (i.e., the color contribution of the illuminant to the captured image) from captured images. Generally, the process of correcting for illuminant casts is termed “white balancing”, referring to the desire to ensure that white objects in an image do in fact appear white, or as they would be seen under daylight conditions.
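The gain-based correction underlying white balancing can be illustrated with a minimal sketch. The example below uses the well-known gray-world assumption (that the scene average is achromatic) purely for illustration; it is not the method of any particular patent discussed herein, and the pixel representation is a simplifying assumption.

```python
def gray_world_balance(pixels):
    """White-balance a list of (R, G, B) pixels under the gray-world
    assumption: the scene average is presumed achromatic, so each
    channel is scaled so its mean matches the overall gray level,
    removing the illuminant's color cast."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    gray = (avg_r + avg_g + avg_b) / 3.0
    # Per-channel gains that map each channel mean onto the gray level.
    gain_r, gain_g, gain_b = gray / avg_r, gray / avg_g, gray / avg_b
    return [(min(255.0, r * gain_r),
             min(255.0, g * gain_g),
             min(255.0, b * gain_b))
            for r, g, b in pixels]
```

Note that this global approach requires the channel means over the whole image before any pixel can be corrected, which is one reason simple white balancing is difficult to perform “on the fly”.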
Various techniques exist in the art for performing white balancing. Ideally, such techniques should offer good accuracy and computational efficiency. Furthermore, it would be particularly advantageous to be able to perform white balancing “on the fly”, i.e., without the need to store the full image prior to processing. However, current techniques tend to trade off performance against computational efficiency and vice versa. For example, U.S. Pat. No. 6,798,449 issued to Hsieh teaches a system in which image data in YCrCb format is operated upon to determine mean values for Cr and Cb over various regions within the image. Using Cr and Cb as coordinate axes, the mean values for each region are used as coordinates to determine, for each region or for the image as a whole, which quadrants within the two-dimensional Cr-Cb chart include the coordinate data, thereby indicating illuminant cast contributions for each region or for the image as a whole. Based upon the quadrant indicated for a given region or for the entire image, gain adjustments are applied to red and blue. While this approach is relatively easy to implement, it offers a relatively simplistic treatment of the problem, and its performance is limited because it operates in a color space (YCrCb) that bears only a poor relation to perceived color space. On the other hand, U.S. Pat. No. 6,573,932 issued to Adams, Jr. et al. teaches a complex, computationally intensive, iterative technique and, unlike Hsieh, performs color correction in a color space related to human perceptual capabilities. However, the color space used by Adams, Jr. et al. is non-linear and thus not well suited to a hardware implementation. Furthermore, the iterative nature of the technique requires storage of at least parts of the image.
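The quadrant-based scheme attributed to Hsieh above can be sketched as follows. This is a simplified illustration only, not the patented method itself: the gain step size and the sign conventions (positive mean Cr taken as a red cast, positive mean Cb as a blue cast) are assumptions for the example.

```python
def quadrant_gains(ycrcb_pixels, step=0.1):
    """Estimate red and blue gain adjustments from the quadrant of
    the mean (Cr, Cb) chroma over a region, in the spirit of a
    quadrant-based cast detector.

    Cr and Cb are assumed centered at zero. A positive mean Cr is
    read as a red cast (so red gain is reduced); a positive mean Cb
    as a blue cast (so blue gain is reduced). The step size is an
    illustrative assumption."""
    n = len(ycrcb_pixels)
    mean_cr = sum(p[1] for p in ycrcb_pixels) / n
    mean_cb = sum(p[2] for p in ycrcb_pixels) / n
    # The sign pair (mean_cr, mean_cb) selects one of four quadrants,
    # each mapped to a red/blue gain adjustment.
    gain_r = 1.0 - step if mean_cr > 0 else (1.0 + step if mean_cr < 0 else 1.0)
    gain_b = 1.0 - step if mean_cb > 0 else (1.0 + step if mean_cb < 0 else 1.0)
    return gain_r, gain_b
```

Because the correction depends only on two running chroma means, such a scheme is cheap; its weakness, as noted above, is that quadrant membership in YCrCb is a coarse proxy for the perceived illuminant cast.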
Accordingly, it would be advantageous to provide a technique for white balancing that offers both good performance and relatively low computational complexity, lending itself to a simple hardware implementation, as well as the ability to be performed “on the fly”, i.e., without the need to store the image.