The present invention relates to a method for processing a color image which is arranged to change colors of the color image, and more particularly to a method for changing a color image which provides a function of simulating, in a natural image, changes in a real-world environment due to factors such as weather and time, and which, by virtue of this simulating function, is suitable for sales presentation of goods, layout simulation of buildings and color design of industrial products.
In the field of computer graphics (referred to simply as CG), there have been proposed some techniques of creating an image by considering factors such as time and weather for the purpose of realistically representing an object like a building in a natural environment and a scene where the object is located. Among those techniques are a technique of generating an object area of an image by considering how sunlight and sky light, which change with factors such as time and weather, influence an object in the image, and a technique of generating an image of a sky area by considering such factors.
There has been developed a technique of generating an image of a sky area by using a sky model which is a function of variables such as sun altitude and clear or cloudy weather. The technique is capable of representing an object by considering how the sunlight and sky light influence the object in the image. This technique was developed by Tomoyuki Nishita et al. and is discussed in SIGGRAPH '86 Conference Proceedings, Aug. 18-22, 1986, Computer Graphics, Vol. 20, No. 4, 1986, which will be referred to as reference 1.
There has been developed a technique of synthesizing a sky area of a natural image in a sky area of an image being generated, deriving proper sunlight and sky light in consideration of several meteorological conditions changing due to weather and time, and realistically representing an object by using the sunlight, the sky light and three-dimensional CAD data for the object. This technique was developed by Atsushi Takagi et al. and is discussed in SIGGRAPH '90 Conference Proceedings, Aug. 6-10, 1990, Computer Graphics, Vol. 24, No. 4, Aug. 1990, which will be referred to as reference 2.
There has been developed a method of changing an object area in a natural image while keeping the texture, shade and shadow of the object. This method is described in JP-A-2-127781, which will be referred to as reference 3. Specifically, this method is designed to consider the color of a light source in an image, that is, to simulate the change of an object area in an image resulting from a change of the color of the light source. In reference 3, when the three-primary color values of the light source in a color image are changed from (Rs, Gs, Bs) to desired values (Rs', Gs', Bs'), the three-primary color values of each pixel contained in an object area of the color image are changed from (R, G, B) to (R', G', B') according to the following set of expressions.

R' = (Rs'/Rs)·R
G' = (Gs'/Gs)·G    (1)
B' = (Bs'/Bs)·B
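As a rough illustration, the per-pixel rescaling of expressions (1) can be sketched as follows. The function name and the NumPy-based formulation are illustrative assumptions, not part of reference 3.

```python
import numpy as np

def rescale_light_source(pixels, cs, cs_new):
    """Change each pixel (R, G, B) by the per-channel ratio of the new
    light-source color (Rs', Gs', Bs') to the old one (Rs, Gs, Bs),
    as in expressions (1)."""
    pixels = np.asarray(pixels, dtype=float)
    ratio = np.asarray(cs_new, dtype=float) / np.asarray(cs, dtype=float)
    # Broadcasting applies (Rs'/Rs, Gs'/Gs, Bs'/Bs) to every pixel at once.
    return pixels * ratio
```

For example, halving the red channel of the light source halves the red channel of every pixel in the object area.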
Reference 3 is based on a reflection model of an object. This will be discussed with respect to FIG. 3. A ray of light 32 applied from a light source to an object 31 is divided into a light (specular reflective components) 34 reflected from an object surface 33 and another light (diffuse reflective components) 36 which enters the object, repeatedly collides with colorants 35 contained in the object material, and emerges from the object. The specular reflective components are reflected at an angle defined by the incident angle and the surface direction of the object, while the diffuse reflective components are radiated in every direction. In a lustrous dielectric material, the specular reflective components have the same ratio of three-primary color values as the main light source, and the diffuse reflective components have three-primary color values defined by the main light source and the solid reflectance of the object. Hence, the specular reflective components are referred to as a light source color and the diffuse reflective components are referred to as an object color. As shown in FIGS. 4(a) and 4(b), the light source color Cs=(Rs, Gs, Bs) 45 and the object color Cb=(Rb, Gb, Bb) 46 representing the diffuse reflective components appear in a distribution 44 within a three-primary color space of pixel values of an object area 43 of a color image 42 of an object 41. Thus, a pixel value C=(R, G, B) of the object area can be represented as follows.

C = Ms·Cs + Mb·Cb

wherein Ms and Mb denote scalar quantities. Ms is defined by a geometrical angle determined by the position of the main light source, the surface of the object and the position of a viewpoint (camera). Mb is defined by a geometrical angle determined by the position of the main light source and the object surface. When the three-primary color values of each pixel are projected onto the plane determined by the vectors Cs and Cb, each pixel value is assumed to lie at the spot defined by its Ms and Mb.
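Given a light source color Cs and an object color Cb estimated from the pixel distribution, the scalars Ms and Mb satisfying C = Ms·Cs + Mb·Cb can be recovered for each pixel, for instance by a least-squares projection onto the plane spanned by Cs and Cb. This projection method and the function name are assumptions for illustration, not taken from reference 3.

```python
import numpy as np

def decompose_pixel(c, cs, cb):
    """Recover the scalar weights (Ms, Mb) such that C ≈ Ms*Cs + Mb*Cb,
    by least-squares projection onto the plane spanned by Cs and Cb."""
    A = np.column_stack([cs, cb]).astype(float)
    (ms, mb), *_ = np.linalg.lstsq(A, np.asarray(c, dtype=float), rcond=None)
    return ms, mb
```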
As is obvious from the above expression, when the color of the light source is changed, the light source color Cs is changed. In actuality, however, when the light source color is changed from Cs=(Rs, Gs, Bs) to Cs'=(Rs', Gs', Bs'), the object color is changed as well. Assuming that the reflectance is Co=(Ro, Go, Bo), the changed object color Cb' can be obtained by the following expression.

Cb' = (Ro·Rs', Go·Gs', Bo·Bs')
Expression (1) can be applied to the entire object area.
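Combining the per-pixel decomposition into Ms and Mb with the updated object color Cb' = (Ro·Rs', Go·Gs', Bo·Bs'), the relighting of a single pixel under a new light source can be sketched as follows. The function and variable names are illustrative assumptions.

```python
import numpy as np

def relight_pixel(ms, mb, co, cs_new):
    """Given the per-pixel scalars Ms and Mb and the reflectance
    Co = (Ro, Go, Bo), recompute the pixel under a new light source Cs':
    the object color becomes Cb' = (Ro*Rs', Go*Gs', Bo*Bs'), and the
    pixel is reassembled as C' = Ms*Cs' + Mb*Cb'."""
    cs_new = np.asarray(cs_new, dtype=float)
    cb_new = np.asarray(co, dtype=float) * cs_new  # channel-wise product
    return ms * cs_new + mb * cb_new
```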
The method disclosed in U.S. application Ser. No. 07/493,447, now U.S. Pat. No. 5,317,678 (referred to as reference 4), is designed to consider how a surface light source like a sky light, as well as a single light source, influences an object.
Reference 4 is useful in representing an image formed by imaging an object in a natural environment, that is, an object in an environment wherein a surface light source like the sky light exists in addition to a single or main light source like the sun. Reference 4 makes it possible to simulate how the object appears when the light environment changes due to changes in environmental factors such as time and weather in the natural environment. That is, it is possible to determine the relation between the sunlight and the sky light on the one hand and environmental factors such as time and weather on the other, wherein the sunlight is the main light source and the sky light is the surface light source.
The sky light will be discussed with respect to FIG. 5. In general, light 51 from the main light source illuminates a clear sky 52 around an object 53, so that the clear sky 52 serves as a surface light source for the object 53. This surface light source produces a sky light 54 which colors a shadow 55 on the object 53, which receives no light from the main light source. According to reference 4, the color of the object can be represented by the following expressions.

C = Ms·Cs + Mb·Cb + Ma·Ca + Cd + B    (4)
Ca = (Ro·Rap, Go·Gap, Bo·Bap)    (5)
Cap = (Rap, Gap, Bap)    (6)

wherein Cap (expression (6)) is the color of the sky light 54 from the surface light source, Ca (expression (5)) is the color of light 56 reflected from the surface of the object 53 at a solid reflectance Co=(Ro, Go, Bo), and Ma is a scalar quantity weighting the sky-light contribution, analogous to Ms and Mb. The vector Cd represents a shift of each pixel value from the plane defined by the light source color and the object color. The shift represents the texture of the object. The vector B represents an influence of an image input device such as a camera or a scanner on the image. FIG. 6 shows how each pixel value of an object is distributed in a three-primary color space, based on the model of reference 4.
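A minimal numerical sketch of the reference-4 model follows, composing the main-light terms with the reflected sky light Ca (expression (5)) derived from the sky-light color Cap (expression (6)). The function name and the treatment of the sky-light weight as a simple scalar Ma, analogous to Ms and Mb, are assumptions made for illustration.

```python
import numpy as np

def pixel_color(ms, mb, ma, cs, cb, co, cap, cd, b):
    """Compose a pixel value from the main-light terms Ms*Cs + Mb*Cb,
    a sky-light term Ma*Ca with Ca = (Ro*Rap, Go*Gap, Bo*Bap), a texture
    shift Cd and a device bias B."""
    # Reflected sky light: channel-wise product of reflectance and sky color.
    ca = np.asarray(co, dtype=float) * np.asarray(cap, dtype=float)
    return (ms * np.asarray(cs, dtype=float)
            + mb * np.asarray(cb, dtype=float)
            + ma * ca
            + np.asarray(cd, dtype=float)
            + np.asarray(b, dtype=float))
```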
CG-based methods, including those described in references 1 and 2, have made it possible to obtain a realistic image, although doing so requires generating a large amount of data, such as three-dimensional data of an object, and performing a large number of calculations.
Reference 1 provides a function of simulating the change of a sky area resulting from changes in weather, time and the like. However, the simulating function does not consider scattered clouds in the sky area. Hence, additional operations, such as creating clouds by a CG method, are needed to obtain a sky area which is as realistic as the sky area of a natural image.
Reference 2 is capable of representing an object very realistically. For that purpose, however, it requires very detailed modeling of a three-dimensional object and a light source, and measured data such as meteorological data and the reflectance of the object, in addition to a large number of calculations. Further, the main purpose of reference 2 is to simulate changes in how an object appears due to factors such as weather and time. As such, no image of a sky area is generated.
References 3 and 4, like the present invention, use information contained in a natural image and, unlike the CG-based methods, do not need to generate a large amount of data, thus making it easy to obtain a realistic image. However, references 3 and 4 have the following disadvantages.
That is, reference 3 considers an object illuminated by a single light source. Hence, it does not have a sufficient capability of representing an object illuminated by a sky light. Further, it does not consider the change of the sky area in a natural image when representing the object.
Reference 4 does consider a surface light source like a sky light as a light source. Hence, it can represent an object illuminated by the sky light. However, like reference 3, it does not consider the change of the sky area in a natural image.