Digital imaging devices such as digital cameras typically include an area-array imaging sensor such as a charge-coupled device (CCD) or CMOS sensor. Such a sensor has a color filter array that produces picture elements (“pixels”) for each of a set of color components (typically red, green, and blue). The amount of signal in each color component depends on the color balance of the scene. Color balance is determined not only by the colors of objects in the scene but also by the color of the illuminant. For example, pictures taken in the shade on a sunny day have a strong blue cast due to the blue sky illuminating the scene. Pictures taken indoors under incandescent lighting have a reddish or yellowish cast.
Once a digital imaging device has captured a raw image, the raw data from the sensor must be processed to produce a finished image. One part of this process is a “color balance” step that removes most or all of the effect of the illuminant on the colors in the image. This step mimics the way the human visual system compensates for the color of the illuminant: people perceive white objects as white regardless of the light source. Likewise, a digital imaging device post-processes its images so that white is reproduced as white independent of the illuminant.
Unfortunately, strongly colored illuminants result in large imbalances in the three color components of the sensor. The exposure time is set based on the strongest color component to avoid saturating pixels of any color. The result is that the other two color components are much smaller than the dominant color component. The weaker colors can be amplified before, during, or after conversion from the analog to the digital domain. However, “gaining up” the weaker color components of the image amplifies the noise in those colors as well.
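The effect described above can be sketched numerically. The following is a minimal illustration, not the patent's method: the channel means, noise level, and gain rule are assumed values chosen only to show that scaling a weak color component up to the dominant one amplifies its noise by the same factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean raw responses of a neutral patch under a reddish
# illuminant (assumed values): red dominates, so exposure is set to keep
# red below saturation, leaving green and blue comparatively weak.
channel_means = {"R": 0.90, "G": 0.45, "B": 0.30}

# Color-balance gains scale each channel up to the dominant one, so a
# white patch comes out equal in all three channels after balancing.
dominant = max(channel_means.values())
gains = {c: dominant / m for c, m in channel_means.items()}

read_noise_std = 0.01  # identical sensor noise per channel (assumed)

for c, mean in channel_means.items():
    # Simulate noisy raw samples, then apply the color-balance gain.
    raw = mean + rng.normal(0.0, read_noise_std, size=100_000)
    balanced = gains[c] * raw
    # Gaining up a weak channel amplifies its noise by the same factor.
    print(f"{c}: gain={gains[c]:.2f}  "
          f"noise std after balance={balanced.std():.4f}")
```

In this sketch the blue channel needs a gain of 3x, so its noise standard deviation after balancing is roughly three times that of the red channel, which is exactly the noise-amplification problem the passage describes.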
It is thus apparent that there is a need in the art for an improved method and apparatus for controlling color balance in a digital imaging device.