When we look at the light L reflected from a dielectric material, such as plastic, we usually see two distinct reflection components: a specular or interface reflection component Ls and a diffuse or body reflection component Lb. The specular or interface reflection occurs at the surface and in only one direction, such that the incident light beam, the surface normal, and the reflected beam are coplanar, and the angles of the incident and reflected light with respect to the surface normal are equal. In general, Ls has approximately the same power distribution as the illumination and appears as a highlight or gloss on the object. Hence, it is also referred to as the illuminant color.
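The mirror-reflection geometry described above can be checked numerically. The following is a minimal sketch (the vectors are illustrative values, not from the source) that computes the reflected direction from an incident direction and a surface normal, then verifies the equal-angle and coplanarity conditions:

```python
import numpy as np

# Specular reflection: for a unit incident direction d and unit normal n,
# the reflected direction is r = d - 2 (d . n) n.
n = np.array([0.0, 0.0, 1.0])                # surface normal (illustrative)
d = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)  # incident direction, toward surface
r = d - 2 * np.dot(d, n) * n                 # reflected direction

# Angle of incidence equals angle of reflection (both measured from n).
cos_in = np.dot(-d, n)
cos_out = np.dot(r, n)
print(np.allclose(cos_in, cos_out))  # True

# Incident beam, normal, and reflected beam are coplanar:
# the scalar triple product vanishes.
print(np.isclose(np.dot(np.cross(d, n), r), 0.0))  # True
```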
As shown in FIG. 1, not all of the incident light is reflected at the surface; some of it penetrates into the material. The refracted light beam travels through the medium, striking pigments from time to time. Within the material, the light rays are reflected and refracted repeatedly at boundaries that have different refractive indices. Some of the scattered light ultimately finds its way back to the surface and exits from the material in various directions, forming a diffuse or body reflection component Lb. This component carries object color information and is also referred to as the object or body color.
The dichromatic reflectance model (DRM) assumes that the reflected light L from a dielectric object surface, such as plastic, paint, paper, or ceramic, is a mixture of the specular reflection component Ls and the body reflection component Lb. Mathematically, the model can be formulated as:

L(λ,φ,ψ,θ) = Ls(λ,φ,ψ,θ) + Lb(λ,φ,ψ,θ),

or simply,

L(λ,φ,ψ,θ) = ms(φ,ψ,θ)Cs(λ) + mb(φ,ψ,θ)Cb(λ)  Eq. (1)

where λ is the wavelength, and the parameters φ, ψ, and θ describe the angles of the incident and emitted light and the phase angle. The terms ms and mb are geometric scale factors, while Cs(λ) and Cb(λ) are the specular color and body color vectors, respectively.
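Eq. (1) can be illustrated with discrete spectra. In this sketch, the spectra, sampled wavelengths, and scale-factor values are all hypothetical placeholders; only the additive form of the model comes from the source:

```python
import numpy as np

# Hypothetical spectra sampled at three wavelengths (nm).
wavelengths = np.array([450, 550, 650])
Cs = np.array([1.0, 1.0, 1.0])  # specular color: roughly the illuminant spectrum
Cb = np.array([0.2, 0.8, 0.3])  # body color: wavelength-dependent object spectrum

def reflected(ms, mb):
    # Eq. (1): L(lambda) = ms*Cs(lambda) + mb*Cb(lambda), where ms and mb
    # depend only on the imaging geometry (phi, psi, theta), not on lambda.
    return ms * Cs + mb * Cb

# Away from a highlight, ms is near zero and only the body color remains;
# inside a highlight, ms is large and the color is pulled toward the illuminant.
print(reflected(ms=0.0, mb=1.0))  # pure body reflection, equal to Cb
print(reflected(ms=0.7, mb=1.0))  # highlight: body color plus specular term
```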
If the description of the scene radiance is restricted to the three narrow wavelength bands of the red, green, and blue spectral ranges of visible light, as is the case in a TV camera, then the scene radiance can be represented as a 3×1 color vector:

C(x,y) = ms(φ,ψ,θ)Cs + mb(φ,ψ,θ)Cb(x,y)  Eq. (2)

where (x, y) specifies a pixel position and C(x, y) is the observed color vector. As can be observed in FIG. 2, this equation implies that the observed color C(x, y) is distributed on a plane spanned by the two vectors Cs and Cb(x, y). The term Cs does not depend on the pixel position since it represents a single illuminant color. Thus, if the vectors Cs and Cb(x, y) can be estimated, the specular reflectance can be distinguished from the diffuse reflectance, thereby enabling the diffuse reflectance, in other words the true color, to be obtained.
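The planar distribution implied by Eq. (2), and the separation it enables, can be sketched on synthetic data. Here Cs and Cb are assumed known (estimating them is the hard problem the later paragraphs discuss), and all numeric values are illustrative, not from the source:

```python
import numpy as np

# Assumed-known color vectors (hypothetical values).
Cs = np.array([1.0, 1.0, 1.0])  # illuminant color (white light)
Cb = np.array([0.8, 0.2, 0.1])  # body color (reddish object)

# Synthesize 100 pixels via Eq. (2) with random per-pixel geometric factors.
rng = np.random.default_rng(0)
ms = rng.uniform(0.0, 0.5, size=100)
mb = rng.uniform(0.5, 1.0, size=100)
C = np.outer(ms, Cs) + np.outer(mb, Cb)  # observed colors, shape (100, 3)

# All observed colors lie on the plane spanned by Cs and Cb: rank 2.
rank = np.linalg.matrix_rank(C)
print(rank)  # 2

# With Cs and Cb known, recover ms and mb per pixel by least squares,
# then keep only the diffuse part mb*Cb, i.e. the "true" object color.
A = np.column_stack([Cs, Cb])                  # 3x2 basis of the plane
coef, *_ = np.linalg.lstsq(A, C.T, rcond=None)  # coef[0] ~ ms, coef[1] ~ mb
diffuse = np.outer(coef[1], Cb)                # specular-free colors
print(np.allclose(diffuse, np.outer(mb, Cb)))  # True
```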
In the prior art, the above estimation is carried out by a multiple illumination method (S. K. Nayar, K. Ikeuchi and T. Kanade, "Determining Shape and Reflectance of Hybrid Surfaces by Photometric Sampling", IEEE Trans. on Robotics and Automation, Vol. 6, No. 4, pp. 418-431 (1990)), or by a multiple image method (M. Otsuki, Y. Sato, "Highlight Separation Using Multiple Images with Intensities and Ranges", MVA '96, IAPR Workshop on Machine Vision Application, pp. 293-296 (1996)). In the multiple illumination method, a plurality of illumination sources are used while viewing the image from the same point; in other words, the Cs component is varied so as to estimate the Cb component and solve Equation (2) to obtain the true object color. In the multiple image method, the same effect is achieved by using a single fixed light source and observing the image of the object from several different orientations.
These prior art methods employ the DRM and attempt to estimate the object color from the patterns that the reflectance color distributions of pixels in various regions of an image form in color space. However, this approach is not effective for correct estimation of object colors when an analyzed region of the image consists of several different color clusters, as in the case of an image of an object with fine or intricate color variations. Furthermore, both of the above methods require a specialized apparatus (i.e., multiple light sources or a device enabling the capture of images from different orientations) and setup environment, and require long processing times. These methods therefore cannot be applied flexibly to objects in all environments and do not adequately meet the high-speed requirements of machine vision applications.