This disclosure relates generally to the field of color balancing. More particularly, but not by way of limitation, it relates to techniques for improving the performance of auto white balance (AWB) algorithms by using noise-optimized selection criteria.
Color balancing may be thought of as the global adjustment of the intensities of the colors in an image. One goal of color balancing is to render specific colors, e.g., neutral white, as accurately as possible to the way the color appeared in the actual physical scene from which the image was captured. In the case of rendering neutral white colors correctly, the process is often referred to as “white balancing.” Most digital cameras base their color balancing and color correction decisions at least in part on the type of scene illuminant. For example, the color of a white sheet of paper will appear differently under fluorescent lighting than it will in direct sunlight. The type of color correction to be performed may be specified manually by a user of the digital camera who knows the scene illuminant for the captured image, or may be set programmatically using one or more of a variety of AWB algorithms.
The “white point” of a scene can be estimated by evaluating an image or images captured by a camera image sensor that has a known response to a set of known light sources. The camera's response to illuminants can be characterized by the following equation:

C_white = S * P,  (Eqn. 1)

where P stands for the set of spectral power distributions of the light sources, S is the spectral sensitivity of the camera, and C_white is the response vector of the camera. In other words, the camera's response will be a function of both the particular type of light source and the particular spectral response of the camera.
In real-world imaging, the camera's response is also a function of the light reflected from object surfaces in the scene. This relationship can be described as:

C_objects = S * R * P,  (Eqn. 2)

where R stands for the spectral reflectance of the object surfaces.
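Eqns. 1 and 2 can be sketched as discretized matrix products over sampled wavelengths. The spectral curves below (Gaussian channel sensitivities, a flat illuminant, a uniform gray reflector) are illustrative assumptions, not characterizations of any real camera:

```python
import numpy as np

# Hypothetical discretization of the visible spectrum: 400-700 nm in 10 nm steps.
wavelengths = np.arange(400, 701, 10)
n = wavelengths.size

# S: camera spectral sensitivity, one row per color channel (R, G, B).
# Made-up Gaussian curves stand in for a measured sensor characterization.
def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

S = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(460, 40)])  # (3, n)

# P: spectral power distribution of an assumed "equal energy" illuminant.
P = np.ones(n)

# Eqn. 1: camera response to the light source itself.
C_white = S @ P                                # response vector, shape (3,)

# Eqn. 2: response to light reflected from a surface with reflectance R_surf.
R_surf = np.full(n, 0.5)                       # a perfectly gray 50% reflector
C_objects = S @ (R_surf * P)

# A gray surface scales the white response uniformly, leaving chromaticity unchanged.
print(np.allclose(C_objects, 0.5 * C_white))   # True
```

The final check illustrates why gray surfaces are so useful for white point estimation: a spectrally flat reflector preserves the chromaticity of the illuminant response.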
The fundamental problem that AWB algorithms must deal with is resolving the scene light source white point from a captured image produced by an unknown light source (P) reflecting off unknown object surfaces in the scene (R), given only the known camera spectral sensitivity (S).
A variety of different methods have been investigated in both academia and industry to resolve the uncertainty in estimating the scene white point from image data alone. The most basic “gray world” AWB algorithm constrains the solution by making a strong assumption about object surface reflectance distribution in the real world, i.e., that the colors of the entire scene will average out to gray. Other published methods include: a version of Bayesian estimation that makes a less strong and more principled model of surface reflectance and illuminant distribution to arrive at better estimates; a “color by correlation” algorithm that makes use of the unique distribution of image chromaticity under different illuminants for its illuminant estimation; and even a class of algorithms that derive white point values from specular or micro-specular reflectance information in the scene.
In industrial practice, however, the most prevalent white balance methods are still those based loosely on a modified gray world method, due to their ease of implementation and decent stability. There can be many variations of such an approach, but most involve first selecting a subset of the pixel responses that are likely to be from neutral surfaces illuminated by plausible light sources, and then making the assumption that the average chromaticity of such pixels is likely to represent the color of true white/gray in the scene. This class of methods will be referred to herein as “selective gray world” algorithms.
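The two-step structure of a selective gray world algorithm (select plausible-neutral pixels, then average them) can be sketched as follows. The chromaticity gate and its tolerance are hypothetical placeholders; a production implementation would select pixels using calibrated, illuminant-dependent gates:

```python
import numpy as np

def selective_gray_world(rgb, chroma_tol=0.25):
    """Sketch of a 'selective gray world' white point estimate.

    rgb: (N, 3) array of linear pixel responses.
    Keeps pixels whose r/g and b/g chromaticities fall within chroma_tol
    of 1.0 (a crude 'likely gray' gate), then averages them.
    """
    g = np.maximum(rgb[:, 1], 1e-6)            # avoid division by zero
    r_chroma = rgb[:, 0] / g
    b_chroma = rgb[:, 2] / g
    likely_gray = (np.abs(r_chroma - 1.0) < chroma_tol) & \
                  (np.abs(b_chroma - 1.0) < chroma_tol)
    if not likely_gray.any():                  # fall back to plain gray world
        likely_gray = np.ones(len(rgb), dtype=bool)
    return rgb[likely_gray].mean(axis=0)       # estimated white point (r, g, b)

# Two near-neutral pixels plus a saturated red pixel the gate should reject.
pixels = np.array([[1.0, 1.0, 1.0],
                   [1.1, 1.0, 0.9],
                   [5.0, 1.0, 0.2]])
print(selective_gray_world(pixels))            # averages only the first two rows
```

Note that this sketch weights every selected pixel equally, which is exactly the limitation the weighting scheme below addresses.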
The biggest limitation of such a selective gray world method is the same one that makes the original gray world method impractical, namely, the assumption that “likely gray” pixel responses actually do average out to gray. Modeling of camera responses to typical object surface reflectances under common illuminants has shown that this assumption is often violated. For instance, depending on the illuminant and surface distribution of each usage scenario, some “likely gray” pixel responses are more likely to be gray than others, i.e., some pixel responses carry more information about the true white point than others. A weighting scheme can be used to treat these pixel responses differently in order to improve white point estimation accuracy. Once the subset of “likely gray” pixels of the captured image is selected, the white point of the scene can be calculated as the weighted sum of these pixel values:

r = sum(R*W) / sum(W);
g = sum(G*W) / sum(W);  (Eqns. 3)
b = sum(B*W) / sum(W),

where W refers to the weight vector, and R, G, and B are the pixel color vectors.
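Eqns. 3 translate directly into a weighted average over the selected pixels. The pixel values and weights below are illustrative only; in practice the weights would come from a model of how likely each pixel response is to be truly gray:

```python
import numpy as np

# R, G, B: responses of the selected "likely gray" pixels (illustrative values).
R = np.array([0.80, 0.82, 0.90])
G = np.array([0.80, 0.80, 0.80])
B = np.array([0.78, 0.80, 0.60])

# W: hypothetical per-pixel weights; the last pixel is judged least likely gray,
# so it contributes little to the estimate.
W = np.array([1.0, 1.0, 0.1])

# Eqns. 3: white point as the weighted average of the selected pixel values.
r = np.sum(R * W) / np.sum(W)
g = np.sum(G * W) / np.sum(W)
b = np.sum(B * W) / np.sum(W)
print(r, g, b)
```

With uniform weights this reduces to the plain selective gray world average; the down-weighted third pixel is what pulls the estimate back toward the truly neutral responses.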
To white balance the image, only two channels need to be adjusted, usually the r and b channels:

R′ = (g/r) * R;  (Eqns. 4)
B′ = (g/b) * B,

where R and R′ are the red channel responses before and after the white balance adjustment, and B and B′ are the blue channel responses before and after the white balance adjustment.
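Applying the gains of Eqns. 4 is a per-channel scaling with green as the reference. The white point and channel planes below are made-up values for illustration:

```python
import numpy as np

# Estimated white point from the weighted sum (illustrative values).
r, g, b = 0.9, 0.8, 0.7

# Red and blue channel planes of a tiny 2x2 image.
R = np.array([[0.45, 0.90], [0.18, 0.36]])
B = np.array([[0.35, 0.70], [0.14, 0.28]])

# Eqns. 4: scale red and blue so the estimated white point becomes neutral;
# the green channel is left untouched as the reference.
R_prime = (g / r) * R
B_prime = (g / b) * B

# The estimated white point itself maps to equal channel responses.
print(np.isclose((g / r) * r, g) and np.isclose((g / b) * b, g))  # True
```

Scaling toward green rather than toward an absolute target is a common convention because it changes only two gains and avoids amplifying the (typically least noisy) green channel.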
Accordingly, there is a need for techniques to provide more accurate white balancing in images using an improved “selective gray world” approach. By intelligently weighting the plausible neutral pixel values when calculating white balance gains, white points can be calculated more accurately.