Images acquired by either film or digital cameras are enhanced through processing of the image color using a number of complex algorithms. The goal of these algorithms is to render an image reproduction that is pleasing and has the same color appearance as the objects in the original scene. The performance of all of these algorithms is substantially improved when the physical properties of objects and/or illuminants in the scene are known.
For example, color balancing is one important image processing operation. Color balancing refers to adjusting the image colors to correct for distortions in color appearance that arise when an image is acquired under one illuminant but rendered under a second, different illuminant. Suppose an image of a scene is captured indoors under a tungsten ambient illuminant. The unprocessed image will have a yellowish color cast when viewed under natural outdoor ambient illumination. The performance of color balancing algorithms improves when the ambient illuminant of the scene is known.
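The correction described above can be illustrated with a minimal sketch. The example below assumes a simple von Kries-style diagonal correction, in which each color channel is divided by the corresponding channel of the estimated illuminant; the illuminant values are hypothetical, not measurements.

```python
def white_balance(pixels, illuminant_rgb):
    """Divide each channel by the illuminant estimate (von Kries-style
    diagonal correction) so that a neutral surface renders as gray.

    pixels: list of (r, g, b) tuples with values in [0, 1]
    illuminant_rgb: estimated (r, g, b) of the ambient illuminant
    """
    ir, ig, ib = illuminant_rgb
    return [(min(r / ir, 1.0), min(g / ig, 1.0), min(b / ib, 1.0))
            for r, g, b in pixels]

# Hypothetical tungsten illuminant: strong in red, weak in blue,
# which produces the yellowish cast described above.
tungsten = (1.0, 0.85, 0.55)

# A surface that is neutral (gray) in the real scene appears with the
# illuminant's own color in the raw image.
raw = [(1.0, 0.85, 0.55)]
balanced = white_balance(raw, tungsten)  # the cast is removed
```

After correction the neutral surface maps back to equal channel values, which is the sense in which the cast is "removed."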
Because of its importance, there has been a great deal of academic and industrial research on illuminant estimation. State-of-the-art ambient illuminant estimation algorithms include gray-world, specular reflections, physical-realizability, color-by-correlation, and Bayesian color constancy algorithms. All of these algorithms work in a passive mode: the algorithms estimate the ambient illuminant using light collected passively by film or a digital image sensor. In passive mode algorithms, the collected and analyzed light originates from the ambient illuminant and is already being collected for imaging purposes. For example, U.S. Pat. No. 6,069,972, issued to Durg et al., discloses a method for white balancing a digital color image. Using the captured image, color components of the pixels are analyzed to determine a global white point and perform color balancing on the entire image.
The most widely known and implemented of these passive mode algorithms is the gray-world algorithm, described in G. Buchsbaum, “A Spatial Processor Model for Object Color Perception,” J. Franklin Institute, 310, 1-26 (1980); and R. W. G. Hunt, The Reproduction of Color, 5th ed., Fountain Press, England (1996). The gray-world algorithm assumes that the average surface reflectance of objects in a scene corresponds to a gray surface. Based on this assumption, the algorithm uses the average color of an image as a measure of the ambient illumination.
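The gray-world estimate reduces to a per-channel mean. The following sketch illustrates the idea on a toy image; the pixel values are hypothetical:

```python
def gray_world_estimate(pixels):
    """Estimate the ambient illuminant as the mean color of the image,
    under the assumption that the scene's average surface reflectance
    is achromatic (gray)."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (r, g, b)

# A toy image with a yellowish cast: red consistently exceeds blue.
pixels = [(0.9, 0.8, 0.5), (0.5, 0.4, 0.3), (0.7, 0.6, 0.4)]
est = gray_world_estimate(pixels)  # mean color, read as the illuminant
```

Here the estimate's red channel exceeds its blue channel, which the gray-world assumption attributes to the illuminant rather than to the scene content.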
Color-by-correlation is a more recent passive mode illuminant estimation algorithm, described in G. D. Finlayson, P. M. Hubel, and S. Hordley, “Color by Correlation,” Proceedings of the IS&T/SID 5th Color Imaging Conference: Color Science, Systems, and Applications, Scottsdale, Ariz., 6-11 (1997). The method assumes that the number of possible ambient illuminants encountered in practice is quite small. The algorithm tests which of the possible illuminants is the most likely one given the image data. Color-by-correlation performs this test by comparing the chromaticity gamut of the image with the chromaticity gamut of each candidate illuminant multiplied by a database of natural surface reflectance functions. The algorithm simply picks the ambient illuminant whose gamut most overlaps the image gamut.
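A highly simplified sketch of this gamut-overlap test follows. The published algorithm works in a finely quantized chromaticity space with a large database of measured surface reflectances; the version below substitutes a coarse bin grid, a handful of hypothetical surfaces modeled as per-channel scale factors, and two hypothetical candidate illuminants, so it illustrates only the structure of the method:

```python
def chromaticity_bin(r, g, b, bins=10):
    """Quantize a color into a coarse (r/s, g/s) chromaticity bin,
    discarding overall intensity."""
    s = r + g + b
    if s == 0:
        return (0, 0)
    return (min(int(bins * r / s), bins - 1),
            min(int(bins * g / s), bins - 1))

# Hypothetical surface reflectances (per-channel scale factors).
surfaces = [(0.9, 0.9, 0.9), (0.8, 0.5, 0.3), (0.3, 0.5, 0.8),
            (0.6, 0.7, 0.4), (0.5, 0.5, 0.7)]

# Hypothetical candidate illuminants (relative power per channel).
illuminants = {
    "tungsten": (1.0, 0.8, 0.5),
    "daylight": (0.9, 1.0, 1.1),
}

# Precompute each candidate's chromaticity gamut: the set of bins it
# can produce over the surface database.
gamuts = {
    name: {chromaticity_bin(ir * sr, ig * sg, ib * sb)
           for sr, sg, sb in surfaces}
    for name, (ir, ig, ib) in illuminants.items()
}

def estimate_illuminant(pixels):
    """Pick the candidate whose gamut overlaps the most image
    chromaticities."""
    image_bins = {chromaticity_bin(r, g, b) for r, g, b in pixels}
    return max(gamuts, key=lambda name: len(image_bins & gamuts[name]))

# Render the surface database under each illuminant; the estimator
# recovers the illuminant used.
tungsten_image = [(1.0 * sr, 0.8 * sg, 0.5 * sb) for sr, sg, sb in surfaces]
daylight_image = [(0.9 * sr, 1.0 * sg, 1.1 * sb) for sr, sg, sb in surfaces]
```

The design choice worth noting is that all scene dependence is folded into the precomputed gamuts, so the per-image work is a simple set-overlap score per candidate.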
In another interesting algorithm, disclosed in U.S. Pat. No. 5,548,398, issued to Gaboury, a temporal sensor is included to detect the flicker frequency of the passive illumination. Steady illuminants are likely to be from natural sources, such as the sun, whereas artificial illuminants, such as fluorescent lights, flicker at known frequencies (typically 60 or 120 Hz). By detecting this temporal frequency, the system can make an improved guess at the likely illuminant type and color.
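The flicker-detection idea can be sketched as a spectral analysis of a temporal brightness signal. This is not the patented implementation; it is a minimal illustration using a plain discrete Fourier transform on a simulated 120 Hz fluorescent source, with the sample rate and modulation depth chosen arbitrarily:

```python
import math

def dominant_flicker_hz(samples, sample_rate_hz):
    """Return the frequency (Hz) with the largest DFT magnitude,
    ignoring the DC term.  A plain O(n^2) DFT keeps the sketch
    dependency-free."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC component
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n)
                 for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate_hz / n

# Simulate a fluorescent source: brightness modulated at 120 Hz,
# sampled at 1 kHz for 0.5 s.
rate = 1000
n = int(rate * 0.5)
fluorescent = [1.0 + 0.3 * math.sin(2 * math.pi * 120 * t / rate)
               for t in range(n)]
```

A steady source would instead show no dominant peak above the noise floor, which is the cue for classifying the illuminant as natural.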
Passive mode algorithms use image data that depend simultaneously on the ambient illuminant and the object surface reflectance functions. In order to derive an estimate of the ambient illuminant from image data, the algorithms must make assumptions about the properties of the object surface reflectance functions. There is no way to verify that these assumptions are true.
Active imaging methods (AIMs) differ from passive algorithms: they emit a signal into a scene. An example of an AIM system is a sonar range finder used for auto-focusing. The time-of-flight for the signal to leave the camera and return is measured and used to specify the distance to an object in the scene. This distance is used to set the camera focus. For more information on auto-focus algorithms, see G. Ligthart and F. C. A. Groen, “A Comparison of Different Autofocus Algorithms,” Proc. of IEEE Int. Conf. on Pattern Recognition (1982). Range scanning systems emit laser pulses or other signals into a scene to determine the distance and shape of three-dimensional objects; for example, see P. Besl, “Active, Optical Range Imaging Sensors,” Machine Vision and Applications, 1, 127-152 (1988).
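The time-of-flight calculation behind a sonar range finder is a one-line conversion; the sketch below assumes sound in dry air at roughly 20 °C:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def sonar_distance_m(round_trip_s):
    """Distance to the reflecting object: the pulse travels out and
    back, so the round-trip time is halved."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# An echo returning after 20 ms places the subject about 3.4 m away,
# and the lens focus is set to that distance.
d = sonar_distance_m(0.020)
```

Laser range scanners use the same time-of-flight relation with the speed of light in place of the speed of sound, which is why they require far finer timing resolution.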
Active imaging methods have not been applied to estimating physical properties relating to color. All existing color balancing methods are passive and therefore require estimation of physical properties that cannot be confirmed by measurement, thereby limiting the accuracy of the methods.