It is expected that cellphones with a camera, digital still cameras, digital movie cameras and other imaging devices will achieve definition as high as that of HDTV in the near future. Meanwhile, the sizes of those devices are being reduced as much as possible in order to further increase their added value. However, if the size of the optical system or the image capture device were reduced significantly, the basic performance of the imaging device would decline below the minimum required level in terms of sensitivity or the diffraction limit of the lens. For that reason, this trend toward higher resolutions should hit a plateau sooner or later.
However, even when the high-resolution trend hits a plateau, image quality can still be improved by supplementing the image information of the object itself with numerous pieces of information about various physical properties, which can then be used to generate an image with the aid of computer graphics. For that purpose, however, the amount of information to collect is far too large to handle with conventional two-dimensional image processing. This is because pieces of physical information, including information about the three-dimensional shape of the object and information about the light source illuminating the object, need to be obtained during the image generation process.
To obtain information about the shape of the object by a conventional technique, either an active sensor that projects a laser beam or a light beam emitted from an LED onto the object, or a rangefinder system such as a differential stereo vision system, is needed. However, such a sensor or system is not just bulky but also allows a distance of at most several meters between the camera and the object. Besides, such a sensor or system cannot be used unless the object is a solid, bright, diffusive object. Under such restrictions, the sensor or system cannot be used to shoot an object located at a distance outdoors (e.g., on a field day) or to take a close-up photo of a person with his or her hair and clothes rendered as beautifully as possible.
To obtain shape information in a completely passive manner about an object to be shot either outdoors or in a normal shooting environment, some techniques use polarization. Patent Document No. 1 discloses a method for monitoring specular reflection components with a polarizer, arranged in front of the lens of a camera, rotated. According to this method, local normal information about the object can be obtained without making any special assumption about the light source illuminating the object (i.e., the light may be either randomly polarized or non-polarized).
A normal to the surface of an object has two degrees of freedom. The surface normal lies in the incident plane, which includes both the incoming light ray and the reflected light ray. Hereinafter, this point will be described.
As shown in FIG. 31(a), a light ray that has come from a light source and reached an observation point is reflected from the observation point and then reaches the camera's focus position (which defines an imaging viewpoint). The angle θs defined between the incoming light ray and the surface normal at the observation point is the angle of incidence. In the case of specular reflection, the angle defined between the light ray reflected and emerging from the observation point (i.e., the outgoing light ray) and the surface normal at the observation point (i.e., the angle of emittance) is equal to the angle of incidence θs. In the case of diffuse reflection, on the other hand, the angle defined between the outgoing light ray and the surface normal at the observation point is referred to as the angle of emittance irrespective of the location of the light source or the angle of incidence, as shown in FIG. 31(b).
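The specular geometry described above can be checked with a short numerical sketch (illustrative only; the vectors and the 30-degree angle below are assumed example values, not taken from the documents cited):

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(dot / (nu * nv))

def reflect(d, n):
    """Specular reflection of incoming direction d about unit normal n."""
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dn * b for a, b in zip(d, n))

# Illustrative example: surface normal along +z, light arriving at 30 degrees.
n = (0.0, 0.0, 1.0)
theta_s = math.radians(30)
d_in = (math.sin(theta_s), 0.0, -math.cos(theta_s))   # travelling toward the surface
d_out = reflect(d_in, n)

# For specular reflection the angle of incidence equals the angle of emittance.
incidence = angle_between(tuple(-a for a in d_in), n)
emittance = angle_between(d_out, n)
```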
The incoming light ray, the surface normal and the outgoing light ray shown in FIG. 31(a) are all included in a single plane, which will be referred to herein as an “incident plane”. On the other hand, the surface normal and the outgoing light ray shown in FIG. 31(b) are also included in a single plane, which will be referred to herein as an “emittance plane”.
FIGS. 32(a) and 32(b) schematically illustrate two incident planes 20 on which the surface normal 12 has mutually different directions. Each of these incident planes 20 includes an incoming light ray 10a, an outgoing light ray 10b and the surface normal 12. In these two incident planes 20 illustrated in FIG. 32, the two directions of the surface normal at the observation point define an angle Ψ between them. That is to say, the surface normal on one of these two planes has rotated from the counterpart on the other plane by the angle Ψ around the optical axis of the outgoing light ray.
If the angle Ψ that defines the incident plane 20 and the angle of incidence θs on the incident plane 20 are both known, then the surface normal 12 can be determined. In other words, to determine the surface normal 12, both of these two angles Ψ and θs need to be obtained (i.e., the degree of freedom is two). The same statement also applies to the diffuse reflection. That is to say, if the angle Ψ that defines the emittance plane and the angle of emittance θd on the emittance plane are both known, the surface normal can be determined.
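Once both angles are known, the normal is fixed. A minimal sketch of this two-angle parameterization follows (the coordinate convention, with the viewing direction along the z-axis and Ψ measured in the image plane, is an assumption made here for illustration, not one stated in the documents cited):

```python
import math

def normal_from_angles(psi, theta):
    """Unit surface normal from the plane angle psi and the zenith angle theta.

    Assumed convention (for illustration): the camera looks along -z, psi is
    measured in the image (x-y) plane, and theta is the angle between the
    normal and the viewing direction (+z toward the camera).
    """
    return (math.sin(theta) * math.cos(psi),
            math.sin(theta) * math.sin(psi),
            math.cos(theta))

# Example: psi = 45 degrees and theta = 30 degrees give a unique unit normal.
n = normal_from_angles(math.radians(45), math.radians(30))
```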
Hereinafter, it will be described how to determine the surface normal in a region where specular reflection is produced. Of the two degrees of freedom described above, the angle Ψ that defines the incident plane may be determined as the angle at which the light intensity, which varies as the polarizer rotates, reaches its minimum. On the other hand, if the material of the object is known, the angle of incidence θs can be estimated based on a PFR (polarization Fresnel ratio) value, which is a quantity correlated with the amplitude of the transmitted radiance (or light intensity) observed as the polarizer is rotated. This is because there is a certain relation between the PFR value and the angle of incidence, as will be described later.
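The intensity observed through a rotating polarizer varies sinusoidally with twice the polarizer angle, so Ψ can be estimated by fitting such a sinusoid to a few intensity samples and taking the angle at which the fit is minimal. A sketch under that model (the sampling scheme and numerical values below are illustrative assumptions, not the documents' procedure):

```python
import math

def fit_polarizer_sinusoid(angles, intensities):
    """Least-squares fit of I(phi) = a + b*cos(2*phi) + c*sin(2*phi).

    Returns (i_max, i_min, phi_min): the extreme intensities and the polarizer
    angle at which the intensity is minimal. With uniformly spaced angles the
    Fourier sums below give the exact least-squares solution.
    """
    m = len(angles)
    a = sum(intensities) / m
    b = 2.0 / m * sum(i * math.cos(2 * p) for p, i in zip(angles, intensities))
    c = 2.0 / m * sum(i * math.sin(2 * p) for p, i in zip(angles, intensities))
    amp = math.hypot(b, c)
    phi_max = 0.5 * math.atan2(c, b)        # angle of maximum intensity
    phi_min = phi_max + math.pi / 2.0       # the minimum lies 90 degrees away
    return a + amp, a - amp, phi_min % math.pi

# Synthetic example: true intensity minimum at a polarizer angle of 50 degrees.
truth = math.radians(50)
angles = [k * math.pi / 16 for k in range(16)]
obs = [1.0 - 0.4 * math.cos(2 * (p - truth)) for p in angles]
i_max, i_min, psi = fit_polarizer_sinusoid(angles, obs)
```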
However, to obtain the PFR value of specular reflection, the diffuse reflection component and the specular reflection component need to be accurately separated from each other at a single point on the object. Also, as long as a single camera is used, two angles of incidence θs are obtained as two different solutions from the PFR value. That is to say, a unique angle of incidence θs cannot be obtained.
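This two-solution ambiguity can be illustrated numerically. In the sketch below, the degree of polarization of specular reflection is computed from the Fresnel intensity reflectances as (Fs − Fp)/(Fs + Fp), for an assumed refractive index of 1.5 (both the formula used here and the index are illustrative assumptions, not the documents' method). The curve rises from zero to unity at the Brewster angle and then falls again, so a single measured value generally corresponds to two different angles of incidence:

```python
import math

def fresnel_specular_dop(theta, n=1.5):
    """Degree of polarization of specularly reflected light,
    rho = (Fs - Fp) / (Fs + Fp), for an assumed refractive index n.
    """
    theta_t = math.asin(math.sin(theta) / n)            # Snell's law
    fs = (math.sin(theta - theta_t) / math.sin(theta + theta_t)) ** 2
    fp = (math.tan(theta - theta_t) / math.tan(theta + theta_t)) ** 2
    return (fs - fp) / (fs + fp)

# Scan angles of incidence and count how often rho(theta) crosses one
# target value: two crossings mean two candidate angles of incidence.
target = 0.7
candidates = [math.radians(d) for d in range(1, 90)]
rhos = [fresnel_specular_dop(t) for t in candidates]
crossings = sum(1 for r0, r1 in zip(rhos, rhos[1:])
                if (r0 - target) * (r1 - target) < 0)
```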
Patent Document No. 1 discloses a technique for estimating the PFR value and the diffuse reflection component Id of the light intensity at the same time by observing the maximum and minimum values Imax and Imin of the light intensity. According to this technique, however, unless statistical processing is carried out on a large group of pixels that have the same specular reflection property but significantly different light intensities (gray levels), a huge error will be produced. For that reason, the technique disclosed in Patent Document No. 1 cannot be applied generally to a specular reflection area, which is usually present only locally.
Patent Document No. 2 discloses a technique for determining a surface normal by measuring the polarization component of light that has been regularly reflected from a transparent object with a known refractive index. More specifically, the incident plane is determined by the minimum value of the polarization component of regularly reflected light and then (Imax−Imin)/(Imax+Imin), which corresponds to the PFR value, is calculated. The angle of incidence θs is also determined based on (Imax−Imin)/(Imax+Imin), thereby obtaining the surface normal.
According to Patent Document No. 2, however, the object must be a transparent object that produces only specular reflection. Also, since the specular reflection is supposed to be produced globally on the object, the object should be surrounded with a special diffuse illumination system. That is why the method disclosed in Patent Document No. 2 cannot be applied to shooting a normal outdoor scene.
Meanwhile, Non-Patent Document No. 1, which is written by the inventors of Patent Document No. 1, takes the polarization phenomena of not only specular reflection but also diffuse reflection into consideration and discloses their own theoretical formulation. However, Non-Patent Document No. 1 just applies that formulation to classifying image edges and neither teaches nor suggests the possibility of applying that formulation to determining the surface normal.
Non-Patent Document No. 2 discloses a special type of image sensor for obtaining light intensity information and polarization information at the same time without rotating a polarizer. Non-Patent Document No. 2 reports a demonstration experiment in which normal information (representing the curves of a car body, for example) was obtained in an outdoor scene from the polarization intensities in four directions by real-time processing using that special type of image sensor. If such an image sensor were introduced into a camera, polarization information could certainly be obtained as a moving picture. However, Non-Patent Document No. 2 does not disclose the details of the algorithm actually used to figure out a normal based on the polarization information.

Patent Document No. 1: U.S. Pat. No. 5,028,138

Patent Document No. 2: Japanese Patent Application Laid-Open Publication No. 11-211433

Non-Patent Document No. 1: Lawrence B. Wolff et al., “Constraining Object Features Using a Polarization Reflectance Model”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 7, July 1991

Non-Patent Document No. 2: Kawashima, Sato, Kawakami, Nagashima, Ota and Aoki, “Development of Polarization Imaging Device and Applications by Using Patterned Polarizer”, Institute of Electronics, Information and Communication Engineers of Japan, National Conference 2006, No. D-11-52, p. 52, March 2006