The present invention relates to a method for detecting the illumination color so that a color-compensated image can be stored, taking into account the effect of the illumination environment on photographing in a color image input apparatus, and more particularly to a method for accurately detecting the illumination color directly from the image produced by the color image input apparatus.
Up to now, many methods are well known for performing color compensation, such as white balance, in a color image input apparatus such as a digital camera.
In circumstances such as robot vision systems that must recognize and identify objects under various lighting conditions, it is essential to detect the illumination color, or the Spectral Power Distribution (SPD) of the illumination, as the illumination component, because the color of an object looks different depending on the light.
An accurate method of detecting the components of light is required in order to perform accurate color compensation, or to obtain accurate recognition results, from the image produced by the image input apparatus.
The methods for performing white balance or color compensation by the prior art include:
a method of performing white balance by referring to the illumination color found by photographing a white plate (or paper) before the actual photographing, in the case of a video camera;
a method of using a camera device with a built-in sensor for detecting the components of the illumination received directly from the light source;
a method of detecting the components of illumination by using a camera with dedicated hardware, in which the light is determined by the user pushing a button on the camera corresponding to a specific illuminant; and
a method of finding the illumination color from the image itself.
When a camera device with a built-in sensor as noted above is used, however, problems arise for images derived from long-distance photographing, which the hardware cannot handle directly. In addition, the attached hardware imposes an additional expense. The button-based approach likewise requires many buttons to cover the various possible illumination components.
U.S. Pat. No. 4,685,071 (the Lee process) discloses a method of finding the illumination color from the image itself as a solution to the problems of the previous methods. The object of the Lee process is to detect the illumination color directly from the color image signal. The Lee process has the advantage of reducing the manufacturing cost of the image input apparatus, because no additional hardware device, such as an illumination-color detector, is required. In the Lee process, a pixel whose color varies by more than a specific magnitude, based on the level of color variation across the image, is designated a color boundary pixel. Sets of pixels are selected on either side of each color boundary pixel and linearly approximated in chromaticity coordinates, to determine whether the pixel represents a boundary between the body color of an object and a highlight. The illumination color is then determined using the Hough transform and the linear approximations of the pixel sets around those boundary pixels. In other words, the illumination color is determined by detecting the specularly reflected light in the scene of the image.
The specularly reflected light can be detected by finding, on each object with a different surface color in the image, a set of points having the same hue but varying saturation.
For the purpose of detecting color variation independent of brightness, the image is converted to a color space with chromaticity coordinates. The color boundaries at which chromaticity and hue vary most steeply are detected, and the sets of data around each boundary produced by the variation of chromaticity are used.
The chromaticity coordinates form a 2-dimensional coordinate system (r, g) converted from the 3-dimensional (R, G, B) coordinate system, and are generally used in the field of colorimetry.
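A minimal sketch of this conversion, assuming the conventional normalization r = R/(R+G+B), g = G/(R+G+B) (the patent text does not spell out the formula; the function name is hypothetical):

```python
def to_rg_chromaticity(R, G, B):
    """Project an (R, G, B) triple onto the 2-D r-g chromaticity plane.

    r = R/(R+G+B) and g = G/(R+G+B); b = 1 - r - g is implied, so two
    coordinates suffice, and brightness is divided out.
    """
    s = R + G + B
    if s == 0:
        # A black pixel carries no chromaticity; map it to the neutral point.
        return (1.0 / 3.0, 1.0 / 3.0)
    return (R / s, G / s)
```

Because the sum R+G+B is divided out, pixels that differ only in brightness map to the same (r, g) point, which is exactly the property the boundary analysis relies on.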
In this case, in order to distinguish whether a boundary was produced by a variation of chromaticity or by a variation of hue, the sets of data on both sides of a boundary point are collected and linearly approximated. The boundary is determined to have been produced by chromaticity when the slopes of the lines derived from the two sets are the same, and those data then become part of the data set for detecting the illumination color. A variable determining the illumination color can then be found from the locus of intersection points of the lines derived from the many data sets around the chromaticity-variation boundary points.
The major problem of the method stated above, however, is that it demands excessive running time for collecting the data sets and performing the approximations.
Specifically, it is difficult to collect data from both sides of a boundary point, and the actions of collecting data sets, approximating them with straight lines, and comparing the results must be repeated at a great many boundary points, because the data are processed boundary point by boundary point. That is to say, processing the edge points pixel by pixel takes a great deal of running time.
Another prior art method is disclosed in U.S. Pat. No. 5,023,704. This prior art is a device for detecting the color temperature of the illumination, with a built-in sensor that detects multilevel color components in the incident light. It responds easily to variations of the illumination color around the sensor; however, the accuracy of the detected color temperature decreases as the photographed scene becomes more distant from the input apparatus. The device is comprised of a detector that detects the multilevel color components in the incident light, a comparator that compares each color component with multiple reference levels, and a controller that modifies the reference levels according to the detected illumination level. Its drawback is that the illumination color can be detected only if such a detector is present.
The invention is intended to solve the above described problems, and one object of the present invention is to provide a method for obtaining faster processing and a more exact illumination color, comprising the steps of:
finding color transition bands from the whole image,
performing linear approximation from chromaticity coordinates of the pixels involved in the color transition bands, and
determining the average of intersections of the straight lines and setting the average as the illumination color.
To achieve the above object, the invention provides a method to detect illumination color from a color image signal comprising steps of:
inputting the color image signal from an image input apparatus;
removing saturated signals, signals with low brightness, and signals with high chroma from the color image signal, i.e., setting their values to 0;
performing multistage median filtering to remove noise while maintaining the edge information of the images;
transforming color signals of R, G, and B of all pixels of the input color image signal into the chromaticity coordinates;
calculating the magnitude of the gradient to get color edges from chromaticity coordinates of images;
determining color transition bands (CTB) as approximated lines;
calculating the eigenvalues and eigenvectors of each color transition band;
determining if the ratio of the bigger eigenvalue (BEV) to the total variance (RBEV) is larger than a first predetermined limit condition;
determining if the absolute magnitude of the BEV (AMBEV) is larger than a second predetermined limit condition;
finding the linear approximation equation of the color transition band using the eigenvector corresponding to the BEV;
confirming if all color transition bands have been processed;
calculating all intersection points between each pair of the approximated lines; and
determining the average chromaticity coordinates of the intersection points of each pair of two lines for which the product of the slopes is negative as the illumination color.
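The eigen-analysis and intersection-averaging steps above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the threshold values, the function names, and the use of NumPy are all assumptions.

```python
import numpy as np

def fit_ctb_line(points, rbev_limit=0.95, ambev_limit=1e-4):
    """Fit a line to one color transition band by principal-axis analysis.

    points: (N, 2) array of r-g chromaticity coordinates of one CTB.
    Returns (slope, intercept), or None if the band fails either the
    RBEV or the AMBEV test (limit values here are illustrative).
    """
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    bev, sev = evals[1], evals[0]
    total = bev + sev                           # total variance
    if total <= 0 or bev / total < rbev_limit:  # RBEV test: band must be line-like
        return None
    if bev < ambev_limit:                       # AMBEV test: band must be long enough
        return None
    vx, vy = evecs[:, 1]                        # eigenvector of the BEV
    if vx == 0:
        return None                             # vertical line: skipped in this sketch
    slope = vy / vx
    cx, cy = pts.mean(axis=0)                   # line passes through the band's centroid
    return slope, cy - slope * cx

def illuminant_from_lines(lines):
    """Average the intersections of line pairs whose slope product is negative."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (m1, b1), (m2, b2) = lines[i], lines[j]
            if m1 * m2 < 0:                     # opposite slopes only
                x = (b2 - b1) / (m1 - m2)
                pts.append((x, m1 * x + b1))
    return tuple(np.mean(pts, axis=0)) if pts else None
```

Fitting the line through the centroid along the principal eigenvector is the standard total-least-squares construction, which matches the claimed use of the eigenvector corresponding to the BEV.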
In the process according to the present invention, it is preferable that a signal with low brightness is a signal whose brightness Y=(R+G+B)/3 is less than 50, that a signal with high chroma is a color signal outside the 0.6 PCC triangle on the r-g chromaticity diagram, and that a saturated signal is a signal for which the value of any one of the R, G, and B elements is 255. It is preferable that the PCC triangle, defined by

PCC = {overscore (CAr)}/{overscore (CMr)} = {overscore (CAg)}/{overscore (CMg)} = {overscore (CAo)}/{overscore (CMo)},
is the triangle which links the points Ar, Ag, and Ao determined according to the magnitude of a specific PCC, where the central point on the r-g chromaticity diagram is C(0.333, 0.333), the maximum points of the r-g coordinates are Mr(1,0) and Mg(0,1), the origin is Mo(0,0), and the segments linking those points, and the points on those segments, are defined as follows:
{overscore (CMr)} is the segment linking the C and the Mr,
{overscore (CMg)} is the segment linking the C and the Mg,
{overscore (CMo)} is the segment linking the C and the Mo,
{overscore (CAr)} is the segment linking the Ar and C for any point Ar on the segment linking C and Mr,
{overscore (CAg)} is the segment linking the Ag and C for any point Ag on the segment linking C and Mg, and
{overscore (CAo)} is the segment linking the Ao and C for any point Ao on the segment linking C and Mo.
According to one embodiment of the present invention, the process comprises the following steps:
finding the magnitudes of all of the gradients of pixels in the input image;
normalizing the magnitude of the gradient;
quantizing the magnitude of the normalized gradient; and
making the image of the gradient as a magnitude of the quantized gradient.
According to one embodiment of the present invention, it is preferable that the gradient magnitude for all of the pixels is given by the equation MoG(x,y)=[er(x,y)+eg(x,y)]/2, where
the gradient is ∇f(x,y)=[∂f/∂x ∂f/∂y]^T ≡ [fx fy]^T for any position (x,y) in the image,
the magnitude of the gradient is e(x,y)=√(fx²(x,y)+fy²(x,y)), and
the magnitude of the gradient is er(x,y) or eg(x,y) for each r, g respectively.
According to one embodiment of the present invention, it is preferable that the equation to normalize the magnitudes of the gradients is given as RMG(x,y)=MoG(x,y)/MaxMoG, where MaxMoG is the maximum gradient magnitude in the overall image, and MoG(x,y) is the gradient magnitude for each pixel.
According to one embodiment of the present invention, it is preferable that the equation to quantize the magnitude of the normalized gradient is given as QMoG(x,y)=Quantization[NRMG(x,y)], 0≤QMoG(x,y)≤MaxQ, where MaxQ is the maximum quantization level, and NRMG(x,y)=RMG(x,y)×MaxQ is the magnitude of the normalized gradient.
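A minimal sketch of the gradient, normalization, and quantization equations above for the r and g chromaticity planes. The choice of discrete derivative operator is an assumption (the patent does not fix one); NumPy's central differences are used here:

```python
import numpy as np

def quantized_gradient_image(r_plane, g_plane, max_q=255):
    """Compute MoG, normalize it to RMG, and quantize it to QMoG.

    r_plane, g_plane: 2-D arrays of r and g chromaticity values.
    Returns an integer gradient image with values in [0, max_q].
    """
    def magnitude(f):
        fy, fx = np.gradient(f.astype(float))   # partials along y and x
        return np.sqrt(fx ** 2 + fy ** 2)       # e(x,y) = sqrt(fx^2 + fy^2)

    mog = (magnitude(r_plane) + magnitude(g_plane)) / 2.0  # MoG(x,y)
    max_mog = mog.max()                                    # MaxMoG over the image
    if max_mog == 0:
        return np.zeros_like(mog, dtype=np.int64)          # flat image: no edges
    rmg = mog / max_mog                                    # RMG in [0, 1]
    nrmg = rmg * max_q                                     # NRMG in [0, MaxQ]
    return np.rint(nrmg).astype(np.int64)                  # QMoG: quantized levels
```

Rounding to the nearest integer level stands in for the Quantization[·] operator; any monotone quantizer onto [0, MaxQ] would serve the same role.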
According to one embodiment of the present invention, it is preferable that the process comprises the following steps:
making a histogram by evaluating all of the quantized gradient magnitudes;
making the accumulated histogram from the histogram of the gradient magnitudes;
thresholding based on the accumulated histogram; and
labeling the thresholded image.
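The histogram, accumulation, thresholding, and labeling steps above can be sketched as follows. The cut-off fraction and the flood-fill labeling scheme are illustrative choices, not values from the patent:

```python
import numpy as np

def threshold_by_cumulative_histogram(qmog, fraction=0.95, max_q=255):
    """Threshold a quantized gradient image via its accumulated histogram.

    Builds the histogram of levels, accumulates it, and picks the first
    level at which the accumulated fraction of pixels reaches `fraction`,
    so only the strongest edges survive.
    """
    hist = np.bincount(qmog.ravel(), minlength=max_q + 1)  # histogram of levels
    cum = np.cumsum(hist) / qmog.size                      # accumulated histogram
    thresh = int(np.searchsorted(cum, fraction))           # first level >= fraction
    return (qmog > thresh).astype(np.uint8)

def label_regions(binary):
    """4-connected component labeling by flood fill (a basic stand-in
    for the labeling step)."""
    labels = np.zeros(binary.shape, dtype=np.int64)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and labels[y, x] == 0:
                        labels[y, x] = count
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, count
```

Each labeled region of strong gradient then serves as one candidate color transition band for the eigen-analysis described earlier.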