A large number of measurement principles already exist today by means of which the three-dimensional shape of objects can be captured. Examples include triangulation methods (laser light-section method, stripe projection, stereo methods, photogrammetry, shape from shading), interferometric methods (laser interferometry, white-light interferometry, holographic methods), and time-of-flight methods (high-frequency modulation of the light source). All of these methods have in common that multiple camera images must be acquired in order to produce a single 3-D image from them. In most of these methods, it is not possible for these images to be acquired at the same time. In contrast, a conventional photographic 2-D image is captured with only one exposure. This applies in particular to image capture in industrial image processing as well.
The principal difficulty lies in the fact that the use of optical 3-D sensors by its nature requires multiple camera captures. The reason is that every 3-D sensor must determine three unknowns for each point of the object to be measured:
the location of the test specimen point, referred to in the following as “shape”
the local reflectivity of the test specimen, referred to in the following as “texture” (black/white)
the local brightness of the ambient light at each point
As a rule, three equations are necessary to determine three unknowns; in the 3-D case, these are the three camera images with the local brightnesses of the test specimen, the three camera images being acquired under three different lighting situations. This is not necessary in the case of 2-D methods because there only the sum of all influences, i.e., of shape, texture, and ambient light, is ever reproduced in one image.
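The three-equations-for-three-unknowns relationship can be sketched per pixel as a small linear system. The following is a schematic illustration only, not the model of any specific sensor: the ambient term A, a texture term R, and a shape term S are assumed to enter the measured intensity linearly, with coefficients g_k and h_k (hypothetical values here) fixed by lighting situation k.

```python
import numpy as np

# Schematic per-pixel model (illustrative assumption, not a specific sensor):
#   I_k = A + R*g_k + S*h_k   for lighting situation k = 1, 2, 3,
# where A = ambient brightness, R = texture term, S = shape term,
# and g_k, h_k are known coefficients of lighting situation k.
G = np.array([
    [1.0, 0.2, 0.9],   # lighting situation 1: [1, g_1, h_1]
    [1.0, 0.7, 0.4],   # lighting situation 2
    [1.0, 0.5, 0.1],   # lighting situation 3
])

# Synthesize the three "camera images" (a single pixel) from known unknowns.
A, R, S = 10.0, 40.0, 25.0
I = G @ np.array([A, R, S])

# Three equations, three unknowns: solve the 3x3 system per pixel.
A_est, R_est, S_est = np.linalg.solve(G, I)
print(A_est, R_est, S_est)  # recovers 10, 40, 25
```

With fewer than three lighting situations this system is underdetermined, which is why a single conventional image cannot separate shape, texture, and ambient light.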
The number of unknowns is reduced to two if all ambient light can be shaded off. In this case, only two images with two different lighting situations are necessary to capture the 3-D shape of the test specimen.
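Continuing the schematic linear model above under the assumption that ambient light is fully blocked (A = 0), the per-pixel system shrinks to two equations in two unknowns (coefficient values again hypothetical):

```python
import numpy as np

# With ambient light shaded off (A = 0), two lighting situations suffice:
#   I_k = R*g_k + S*h_k,  k = 1, 2   (illustrative coefficients).
G2 = np.array([[0.2, 0.9],
               [0.7, 0.4]])
R, S = 40.0, 25.0
I2 = G2 @ np.array([R, S])

# Two equations, two unknowns: a 2x2 solve per pixel.
R_est, S_est = np.linalg.solve(G2, I2)
print(R_est, S_est)  # recovers 40, 25
```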
Even among 3-D methods, there are approaches that allow all of the necessary information to be obtained from one single camera image. One such method is spatial phase shifting. Its application, however, is limited to interferometry and stripe projection. In this method, the different lighting situations are realized by an interference pattern or a stripe pattern that has regions of different lighting intensities. Three different lighting situations can therefore be detected at three neighboring picture elements, from which the three unknowns may then be calculated. However, this method cannot be applied to the shape-from-shading methods, in particular not to photometric stereo or photometric deflectometry (see WO2004051186, DE102005013614), or to time-of-flight methods, because in this case an object is lit from different directions at different times and a simultaneous recording of multiple lighting situations is thus not possible.
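The spatial-phase-shifting idea can be sketched with the standard three-step phase-shifting formula: three samples of a sinusoidal fringe, offset by 120°, determine offset, modulation, and phase. In the spatial variant, the three samples are assumed to come from three neighboring pixels of a single image (the simulated values below are illustrative).

```python
import math

def three_step_phase(i1, i2, i3):
    """Recover the fringe phase from three samples shifted by -120, 0, +120
    degrees (standard three-step phase-shifting formula). In spatial phase
    shifting, the three samples come from neighboring pixels of one image."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Simulate three neighboring pixels under a fringe pattern:
A, B, phi = 50.0, 30.0, 0.8   # offset, modulation, sought phase
d = 2.0 * math.pi / 3.0       # 120 degree spatial phase step
i1 = A + B * math.cos(phi - d)
i2 = A + B * math.cos(phi)
i3 = A + B * math.cos(phi + d)
print(three_step_phase(i1, i2, i3))  # recovers phi = 0.8
```

This works only because the fringe pattern encodes the three lighting situations side by side in space; lighting from different directions at different times, as in photometric stereo, cannot be encoded this way.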
However, for many applications, in particular in industrial image processing, it is important for all image information to be recorded simultaneously or virtually simultaneously. Only in this manner can test specimens in motion be measured and analyzed without motion blur. For this purpose, the exposure time of the camera is reduced to a minimum, or flash illumination is used. To control the exposure time, the cameras are equipped with a so-called electronic shutter: all picture elements (pixels) of the camera chip are made photosensitive simultaneously for a predetermined time. Typical exposure times range from a few milliseconds down to several microseconds. The camera chip is read out after exposure. Exposure can take place much more quickly than readout, which normally takes several milliseconds. An image acquisition period is therefore composed of exposing and reading out the camera chip, and its duration is determined by the readout time, which takes substantially longer than the exposure. Because of the long image acquisition period, the refresh rate, i.e., the number of images that can be recorded per second, is reduced; the refresh rate is therefore also determined by the readout time. Specially designed and expensive high-speed cameras, which are able to record, for example, a few thousand images per second, are the exception. The optical 3-D methods thus have a decisive disadvantage: instead of an exposure time of a few microseconds, a series of, for example, four images (at, for example, 20 ms per image acquisition) requires an acquisition time of 80 ms, more than 1000 times that of the 2-D method.
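The timing disadvantage quoted above can be checked with a short calculation; the single-image exposure of 80 µs below is an assumed value chosen so that the factor works out to exactly 1000 ("a few microseconds" in the text).

```python
# Numbers from the example in the text; readout dominates each acquisition.
exposure_us = 80.0         # assumed 2-D single-image exposure, microseconds
acq_per_image_ms = 20.0    # one image acquisition (exposure + readout), ms
n_images = 4               # image series needed for one 3-D measurement

series_ms = n_images * acq_per_image_ms
factor = series_ms * 1000.0 / exposure_us   # 3-D series vs. 2-D exposure

print(series_ms)  # 80.0 ms total acquisition time
print(factor)     # 1000.0
```

With a shorter assumed exposure, the factor grows accordingly, which is why the text states "more than 1000 times."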