1. Field of the Invention
The present invention relates to the field of imaging, the field of computer assisted imaging, the field of digital imaging, and the field of automatically controlled enhancement of specific attributes of digital imaging data such as contrast.
2. Background of the Invention
The proliferation of digital imaging means such as photography, scanning, copying, printing and digital cameras has resulted in a large volume of color imagery. Since none of the devices produce consistently perfect color, especially in the hands of unskilled amateurs, there is a need to correct the color of images. Color correction has been the object of much effort in the imaging art but there remains a need for simple correction methods that can be applied in an automated way.
One approach to correcting color is based on the idea that improper colors in the scene are the result of illumination that is not perfectly white. If the illuminant of the scene can be determined, then the colors of the scene can be corrected to their appearance under some standard reference illuminant. Many methods make the assumption that specular reflections in the image have a color corresponding to the illuminant, so that the brightest pixels in the image can be used to recover the illuminant color. Examples of this approach include the following. U.S. Pat. No. 4,685,071 describes estimating the illuminant by determining the locus of intersection of lines fitted through sets of points of constant hue and varying saturation. U.S. Pat. No. 5,495,428 teaches a similar method in which the improvement involves weighting the lines according to their reliability. U.S. Pat. No. 5,825,916 involves a related fitting of lines to a smoothed chromaticity bitmap. U.S. Pat. No. 6,104,830 discloses a similar procedure in which the location of the lines is estimated by means of a Hough transform. In “Signal Processing by the Input Interface to a Digital Color Laser Copier” by A. Usami, SID 90 Digest, p. 498–500 (1990) the brightest non-white pixel is considered representative of the illuminant and is adjusted to a neutral color. However, these methods fail when detector saturation occurs, since the brightest pixels then no longer represent the illuminant, and, except for the Usami procedure, they are computationally expensive.
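The brightest-pixel idea underlying the Usami procedure can be illustrated by a minimal sketch, assuming pixels are (R, G, B) triplets of floats in [0, 1] with nonzero channels in the brightest pixel; the function name and data layout are illustrative, not taken from any of the cited references:

```python
def white_patch_correct(pixels):
    """Estimate the illuminant from the brightest pixel and rescale each
    channel so that pixel becomes neutral (the white-patch assumption)."""
    # Brightest pixel by channel sum; assumed to reflect the illuminant color.
    r_i, g_i, b_i = max(pixels, key=lambda p: p[0] + p[1] + p[2])
    # Divide out the estimated illuminant so it maps to neutral (1, 1, 1),
    # clamping to the valid range.
    return [(min(r / r_i, 1.0), min(g / g_i, 1.0), min(b / b_i, 1.0))
            for r, g, b in pixels]

# The brightest pixel (0.8, 0.6, 0.4) is treated as the illuminant color.
corrected = white_patch_correct([(0.8, 0.6, 0.4), (0.4, 0.3, 0.2)])
```

Note that this sketch also exhibits the failure mode discussed above: if the detector saturates, the brightest pixel no longer encodes the illuminant and the scaling is wrong.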
An alternative approach to illuminant estimation is to examine the gamut of colors in an image. If certain colors are present in the image, particular illuminants can be excluded. For example, since objects are usually colored by virtue of reflecting light, if a scene contains the color red then the illuminant must contain red and cannot, for instance, be blue. The gamut mapping procedure for illuminant recovery is described in “A novel algorithm for color constancy” by D. Forsyth, Int. J. Comput. Vision, 5, p. 5–36 (1990). A more efficient version in a chromaticity color space has been developed by G. Finlayson and S. Hordley, Proc. IEEE Conf. Comput. Vision Patt. Recogn., p. 60–65 (1998). European Pat. 0 862,336 teaches the use of the method in a digital camera. These methods are computationally intensive and do not uniquely identify the illuminant without additional assumptions. Moreover, digital images can be subject to arbitrary color manipulation, so that color imbalance does not necessarily result from illuminant changes.
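The exclusion principle behind gamut mapping can be sketched in a much-simplified form. Assuming reflectances lie in [0, 1] and observed color is reflectance times illuminant, a candidate illuminant is only feasible if no observed channel exceeds the illuminant's power in that channel; the names and candidate set below are illustrative, not the Forsyth or Finlayson–Hordley algorithms themselves:

```python
def feasible_illuminants(pixels, candidates):
    """Gamut-style exclusion: keep a candidate illuminant (r, g, b) only if
    every observed color could arise from a physical reflectance in [0, 1]
    under that light, i.e. pixel_channel <= illuminant_channel everywhere."""
    return [ill for ill in candidates
            if all(p[c] <= ill[c] for p in pixels for c in range(3))]

# A saturated red in the scene excludes a bluish illuminant with little red power.
pixels = [(0.9, 0.2, 0.1)]
cands = [(1.0, 1.0, 1.0),   # white light: feasible
         (0.3, 0.4, 1.0)]   # bluish light: cannot produce red at 0.9
remaining = feasible_illuminants(pixels, cands)  # only white light survives
```

As the passage notes, such constraints narrow the set of possible illuminants but do not by themselves pick a unique one.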
Yet another method of color correction is based on the gray world assumption introduced by Evans in U.S. Pat. No. 2,571,697. The method relies on the idea that in a complex natural scene, such as typically occurs in amateur photographs, the average of all the colors is gray. Thus, by adjusting the mean color of the image to gray, color correction can be achieved. However, this method fails when the scene content does not, in fact, correspond to an average gray. This happens, for instance, when an object of a single color dominates the scene or in digital images, such as business graphics, which have a simple color distribution. There have been attempts to improve the method by applying it to individual luminance ranges in the image. Examples include U.S. Pat. No. 5,233,413, U.S. Pat. No. 5,357,352 and U.S. Pat. No. 5,420,704. Another variation in U.S. Pat. No. 5,926,291 seeks to use only colors of low and high brightness and also low saturation as the basis for correction. The same gray world assumption is used in the retinex family of algorithms as discussed in “Investigations into multiscale retinex”, K. Barnard and B. Funt, Color Imaging in Multimedia '98, p. 9–17, Derby, UK, March 1998. None of these methods is, however, fully satisfactory, because of failure of the gray world assumption.
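The basic gray world correction can be sketched as follows, assuming floating-point RGB triplets; this is the plain form of the assumption, not any of the patented luminance-range refinements cited above:

```python
def gray_world_correct(pixels):
    """Scale each channel so the image mean becomes neutral gray
    (the gray world assumption: scene colors average to gray)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3.0  # common gray level to map each channel mean onto
    return [tuple(p[c] * target / means[c] for c in range(3)) for p in pixels]

# After correction, all three channel means coincide at the common gray level.
balanced = gray_world_correct([(0.8, 0.4, 0.2), (0.4, 0.4, 0.4)])
```

The failure mode described in the passage is visible here: a large single-colored object skews the channel means, and the correction then shifts the whole image away from its true colors.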
In order to improve color balancing performance some workers have taken advantage of the statistical distribution of image types submitted to the color correction system and have developed corrections tailored to certain common types of color defects. Examples include U.S. Pat. No. 4,339,517 and U.S. Pat. No. 6,097,836. However, such approaches are useless when the images to be processed do not fall into a few simple categories. Other correction methods attempt to capture the experience of imaging experts by developing rules for image correction based on examination of a very large number of images as exemplified by U.S. Pat. No. 5,694,484. In WO 97/01151 there is disclosed a color correction system that is taught color preferences through the use of neural networks. Such methods frequently fail when they encounter images not in the original training set. Moreover, the effort of developing such a method is very great because of the large number of images on which it is based and the resulting method is hard to understand and modify because of its complexity. In the case of neural networks, there is a danger of over-training, where correction of the training set improves at the expense of generality in the correction performance.
A range of empirical color correction methods also exists, based on statistical analysis of color histograms and sometimes also brightness histograms. Examples include U.S. Pat. No. 4,729,016, U.S. Pat. No. 4,984,071, U.S. Pat. No. 5,117,293, U.S. Pat. No. 5,323,241 and U.S. Pat. No. 6,151,410. Most of these methods place particular emphasis on the highlight and shadow regions of the histogram, though some, such as U.S. Pat. No. 6,151,410, specifically exclude some of these regions on the grounds that the data are unreliable. These methods depend on the image data set used to derive the statistical analysis of the histogram and can be unsatisfactory for images of a type not in the original data set. At least some of these methods cause over-correction of the image when applied repeatedly and can result in information loss through clipping of the lowest and highest intensities in the image. These methods are, further, inherently incapable of correcting for different scene illuminants.
There are also color correction methods based on manually specifying a black or white point in the image or a color that should be considered as neutral gray. Such a capability is available as software in the “Balance to sample” feature of PhotoStyler 2.0 (Aldus Corporation, 411 First Avenue South, Seattle, Wash. 98104), in the “Curves” feature of Photoshop 5.5 (Adobe Systems Incorporated, 345 Park Avenue, San Jose, Calif. 95110–2704) and in the “Automatic” mode of the “Tint” feature in PhotoDraw 2000 (Microsoft Corporation, One Microsoft Way, Redmond, Wash. 98052-6399). Correction using manually specified highlight and shadow regions is disclosed in U.S. Pat. No. 5,062,058 and also falls within the claims of U.S. Pat. No. 5,487,020. These methods, however, require manual intervention, and it can be difficult to select the optimal black and white points to achieve the desired correction. There have been attempts to automate this correction process. In 1996 Photoshop 4.0 introduced the “Auto Levels” feature that stretches the three color channel histograms to full range, by default clipping the top and bottom 0.5% of the channel values. A similar feature is available as “Auto Tonal Adjustment” in PhotoStyler 2.0. Additionally, U.S. Pat. No. 5,812,286 teaches such a method of correction. These methods have the disadvantage that part of the image information is lost through the clipping process and, further, image contrast is undesirably changed along with the correction of color. An attempt to solve this difficulty is disclosed in U.S. Pat. No. 5,371,615, wherein the RGB color triplets for each image pixel are examined to determine the blackest non-black pixel as min[max(R,G,B)] and the whitest non-white pixel as max[min(R,G,B)], ignoring exactly black and exactly white pixels.
Subsequently a black point Wmin is specified as having all three color channels equal to min[max(R,G,B)] and a white point Wmax as having all three color channels equal to max[min(R,G,B)], and then each color channel value Xin is corrected to a new value Xout according to:

Xout = (Wmax − Wmin) × (Xin − Xmin)/(Xmax − Xmin) + Wmin,

where Xmin and Xmax are respectively the minimum and maximum values occurring in that color channel.
This procedure, however, has the disadvantage that the color correction can depend on as few as two pixels in the image. This renders the method susceptible to noise and to defective pixels in digital camera detectors. At the same time the method does not retain the contrast of the image, an effect that can be especially marked when the green channel does not participate in the definition of Wmin and Wmax.
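A minimal sketch of this style of correction follows, assuming floating-point RGB in [0, 1], that Xmin and Xmax in the formula are the per-channel extremes, and that each channel actually varies; the function name is illustrative, and this is a reading of the formula above rather than a reproduction of the patented implementation:

```python
def minmax_correct(pixels):
    """Map each channel's observed range [Xmin, Xmax] onto the common range
    [Wmin, Wmax], where Wmin is the blackest non-black pixel and Wmax the
    whitest non-white pixel. Note Wmin and Wmax can each depend on a single
    pixel, which is the noise sensitivity discussed in the text."""
    # Ignore exactly black and exactly white pixels.
    usable = [p for p in pixels
              if p != (0.0, 0.0, 0.0) and p != (1.0, 1.0, 1.0)]
    w_min = min(max(p) for p in usable)   # blackest non-black: min[max(R,G,B)]
    w_max = max(min(p) for p in usable)   # whitest non-white: max[min(R,G,B)]
    x_min = [min(p[c] for p in usable) for c in range(3)]  # per-channel minima
    x_max = [max(p[c] for p in usable) for c in range(3)]  # per-channel maxima
    return [tuple((w_max - w_min) * (p[c] - x_min[c]) / (x_max[c] - x_min[c])
                  + w_min for c in range(3))
            for p in pixels]

# Both extreme pixels become neutral, but the output is compressed into
# [Wmin, Wmax] = [0.4, 0.6], losing contrast as the passage observes.
out = minmax_correct([(0.2, 0.4, 0.3), (0.8, 0.6, 0.7)])
```

Because only the two pixels defining Wmin and Wmax anchor the mapping, a single noisy or defective pixel can shift the entire correction, as noted above.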