Advanced three-dimensional (3D) scanning systems intended for 3D graphics applications, and in particular for realistic rendering applications, capture both the shape (geometric properties) and photometric properties of objects. Machine vision systems for inspecting surface finish quality also capture both the geometric and photometric properties of objects.
As employed herein, capturing the “shape” of an object means to model mathematically the 3D space occupied by the object, while capturing the “photometric properties” of an object means to model mathematically how the object interacts with light, such as how the object reflects and refracts light.
There exist in the literature several predictive models for light reflection, with different levels of complexity depending on the shape and on the type of object material considered. The data required to “fit” such models is composed of images acquired under different (yet known) illumination conditions. In practice, the shape and photometric data may be obtained by using several light sources of relatively small size, positioned at some distance from the object and preferably somewhat evenly distributed around it. A representative example of one such object illumination and image capture system 10 is shown in FIG. 1. Reference may also be had to FIG. 9 of commonly assigned U.S. Pat. No. 6,455,835 B1, “System, Method, and Program Product for Acquiring Accurate Object Silhouettes for Shape Recovery”, by Fausto Bernardini, Henning Biermann, Holly E. Rushmeier, Silvio Savarese and Gabriel Taubin (incorporated by reference herein), for showing an array of M (e.g., nine) light sources mounted on a frame with a color camera.
In FIG. 1 the object illumination and image capture system 10, also referred to herein as a “scanning system”, includes an outer frame 12A and an inner frame 12B. A camera, such as a color camera 100, is mounted on the inner frame 12B, and a plurality, e.g., five, halogen light sources 210, 220, 230, 240 and 250 are mounted on the outer frame 12A. A laser scanning device 150 for capturing object shape information may be provided as well. Representative dimensions are a height (H) of 50 cm and a width (W) of 100 cm.
The goniometric (i.e., directional) distribution of the lights 210–250, and their locations with respect to the camera 100, are determined a priori. Nominal goniometric data is typically provided by the light source manufacturer, but the actual distribution varies from source to source and over time. Exemplary light distributions are illustrated in FIGS. 2A and 2B for ideal and real light sources, respectively. The location of the light sources is measured in the context of a particular data acquisition system.
An example of a scanning method that uses small light bulbs with calibrated position and known directional distribution for photometric stereo can be found in R. Woodham, “Photometric method for determining surface orientation from multiple images”, Optical Engineering, 19(1):139–144, 1980. One application of photometric stereo is in the inspection of surfaces to detect flaws or cracks (see, for example, M. Smith and L. Smith, “Advances in machine vision and the visual inspection of engineered and textured surfaces”, Business Briefing: Global Photonic Applications and Technology, World Markets Research Center, pages 81–84, 2001). Another application of photometric stereo is the recovery of fingerprints: G. McGunnigle and M. J. Chantler, “Recovery of fingerprints using photometric stereo”, in IMVIP2001, Irish Machine Vision and Image Processing Conference, pages 192–199, September 2001. Capturing images of objects illuminated by small light bulbs with calibrated position, using a geometrically calibrated camera, is also used to compute the surface properties (color and specularity) of objects for use in computer graphics rendering systems. A summary of such methods can be found in F. Bernardini and H. Rushmeier, “The 3d model acquisition pipeline”, Computer Graphics Forum, 21(2), 2002. Recent publications that describe this type of application in more detail are: Hendrik P. A. Lensch, Jan Kautz, Michael Goesele, Wolfgang Heidrich and Hans-Peter Seidel, “Image-based reconstruction of spatially varying materials”, in Rendering Techniques '01, London, UK, June 2001, and H. Rushmeier and F. Bernardini, “Computing consistent surface normals and colors from photometric data”, in Proc. of the Second Intl. Conf. on 3-D Digital Imaging and Modeling, Ottawa, Canada, October 1999. The computer graphics rendering of captured objects is used in many applications, such as feature films, games, electronic retail and recording images of cultural heritage.
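The core computation of photometric stereo for a Lambertian surface can be sketched as follows: at each pixel, the intensities observed under several known light directions are stacked into a linear system whose least-squares solution yields the surface normal and albedo. This is only an illustrative sketch, not the method of any cited reference; the light directions and intensities below are assumed values.

```python
import numpy as np

# Assumed unit light directions for three calibrated sources.
L = np.array([
    [0.0, 0.0, 1.0],    # light directly above the surface
    [0.7, 0.0, 0.714],  # light offset along x
    [0.0, 0.7, 0.714],  # light offset along y
])

def recover_normal(intensities, lights):
    """Recover the surface normal and albedo at one pixel, assuming a
    Lambertian surface: I_k = albedo * (lights[k] . n)."""
    g, *_ = np.linalg.lstsq(lights, np.asarray(intensities, float), rcond=None)
    albedo = np.linalg.norm(g)   # magnitude of the scaled normal is the albedo
    normal = g / albedo          # direction is the unit surface normal
    return normal, albedo

# Synthetic data: a flat, upward-facing patch with albedo 0.8.
true_n = np.array([0.0, 0.0, 1.0])
I = 0.8 * L @ true_n
n, rho = recover_normal(I, L)
```

With three or more non-coplanar light directions the system is well-posed; additional lights simply over-determine the least-squares fit.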
Various techniques have been employed in the past to address the problem of measuring either the position of a light source or its directional distribution. One method for measuring the position of small light sources is to use a separate digitizing system, such as a robotic arm or other portable coordinate measurement system. An example of such an arm is described in U.S. Pat. No. 5,611,147, “Three Dimensional Coordinate Measuring Apparatus”, Raab. Another method for measuring light source position is to observe two or more shadows of objects, with known dimensions, cast on a plane whose coordinates are known in the camera coordinate system. Such a method is described in U.S. Pat. No. 6,219,063, “3D Rendering”, Bouguet et al. Knowing the position of the base of two objects, and the end of their shadows (or of the same object in two locations), the light source position can be computed by finding the intersection of the rays joining each end-of-shadow and object tip pair.
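The intersection-of-rays computation described above can be sketched as follows. Each ray runs from a shadow end through the corresponding object tip; since two measured rays rarely intersect exactly, the midpoint of their closest approach serves as the light position estimate. The point coordinates below are assumed for illustration.

```python
import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between the 3D lines p1 + t*d1
    and p2 + s*d2 (a least-squares 'intersection' of two rays)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for minimizing |(p1 + t*d1) - (p2 + s*d2)|^2 over t, s.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Assumed measurements: two object tips and their shadow ends on the plane z=0,
# consistent with a light source located at (0, 0, 2).
tip1, shadow1 = np.array([0.5, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
tip2, shadow2 = np.array([0.0, 0.5, 1.0]), np.array([0.0, 1.0, 0.0])
light = closest_point_between_rays(shadow1, tip1 - shadow1,
                                   shadow2, tip2 - shadow2)
```

In practice the accuracy of the recovered light position depends directly on how precisely the object tips and shadow ends can be localized, which is the weakness of the shadow-casting method discussed below.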
An ideal light source emits light isotropically, as shown in FIG. 2A. However, all practical (real) light sources emit light with a directional distribution, as shown in FIG. 2B. A number of complex methods have been devised to measure the directional distribution of a light source, and a summary of such methods can be found in U.S. Pat. No. 5,253,036, “Near-Field Photometric Method and Apparatus”, Ashdown. These methods generally involve taking numerous individual measurements with a light meter over the sphere of directions surrounding the light source.
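To make the notion of a directional distribution concrete, a minimal sketch follows (this is illustrative only, and not the near-field photometric method of the cited patent): meter readings taken at several angles from the source axis are fit to a simple cosine-power model I(θ) = I0·cos(θ)^n by log-linear least squares. The sample angles and readings are assumed, synthetic values.

```python
import numpy as np

# Assumed light-meter sample angles (from the source axis) and synthetic
# readings generated from a cosine-squared distribution with I0 = 100.
theta = np.radians([0.0, 15.0, 30.0, 45.0, 60.0])
readings = 100.0 * np.cos(theta) ** 2.0

# Linearize the model: log I = log I0 + n * log cos(theta),
# then solve for (log I0, n) by least squares.
A = np.column_stack([np.ones_like(theta), np.log(np.cos(theta))])
coeffs, *_ = np.linalg.lstsq(A, np.log(readings), rcond=None)
I0, n = np.exp(coeffs[0]), coeffs[1]
```

A real goniometric characterization would tabulate many more samples over the full sphere of directions rather than fit a two-parameter model, which is precisely why such measurements are time consuming.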
A robotic arm (as in U.S. Pat. No. 5,611,147), or a portable coordinate measurement system, can very accurately measure light source position. However, a robotic arm can be very costly, particularly if a large workspace (the distance between the light sources used) is considered. A robotic arm also requires substantial human intervention, since the arm tip has to be manually placed at the center of each light source to make the measurements. Finally, after finding the light source positions, a robotic arm cannot be used to make measurements of the light source directional distribution. A separate measurement technique is required to make the measurement of directional distribution.
The shadow casting method described in U.S. Pat. No. 6,219,063 is limited in the space in which light positions can be calibrated. The method requires that the location of the plane on which the shadows are cast be known a priori. However, the location of the plane can only be known in camera coordinates if that plane itself is used for the original camera calibration. This limits the orientations of shadows that can be observed. Furthermore, a planar calibration method requires that the plane used not be perpendicular to the line-of-sight of the camera. Also, the method described in U.S. Pat. No. 6,219,063 yields only approximate light locations, as there is no means of specifying the precise location of the tip of the shadow-casting object: the tip is specified only to the accuracy of the width of the shadow-casting object. Since there is no unique feature of the object base, a point on the base must be specified manually by the user. The result is also prone to error because the shadow tip is not well defined, and can only be located by the user manually selecting a pixel that coincides with the tip of the shadow. This technique also requires two or more images for each light source, and includes no method for computing the light source distribution in the scanning area.
Determining the light source directional distribution with a series of light meter measurements as described in U.S. Pat. No. 5,253,036 is time consuming, and furthermore requires the use of an additional device beyond the camera and light sources needed in the photometric or 3D scanning system.
As can be appreciated, the foregoing prior art techniques for light source calibration are not optimum, as they involve increased cost and complexity, and/or manual user intervention which can give rise to user-introduced errors.