The invention concerns a method for determining the spatial coordinates of an object according to the introductory part of the main claim as well as a device for carrying out the method.
For the contactless two-dimensional (full-field) detection of surface shapes, surface geometries or the coordinates of selected points, various optical principles are employed. Common to all these methods is that the 3-D coordinates of a surface measuring point can be determined only if at least three independent measured values are available for that point. In addition, assumptions about the geometry of the measuring system enter into the result.
One method is the conventional strip projection technique, which is carried out with one or more CCD cameras and a projector (DE 41 20 115 C2, DE 41 15 445 A1). In devices of this kind, line grids or Gray code sequences are projected onto the surface to be measured. A CCD camera records on each of its receiver elements the intensity of one picture element on the surface. With known mathematical algorithms, phase measurements are calculated from these intensity measurements. The desired object coordinates can subsequently be calculated from the phase measurements and the image coordinates of the measuring points in the image plane of the photographic system. A precondition for this, however, is knowledge of the geometry of the measuring system (orientation parameters of projector and camera) as well as of the imaging properties of the projection and imaging lenses.
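The calculation of phase measurements from intensity values can be illustrated with the classical four-step phase-shift formula. The sketch below is illustrative only and is not taken from the cited documents; the function name and the choice of four 90° shifts are assumptions:

```python
import numpy as np

def phase_from_intensities(i0, i1, i2, i3):
    """Recover the fringe phase from four intensity images shifted by 90°.

    For I_k = A + B*cos(phi + k*pi/2), the differences eliminate the
    background A and modulation B:
        I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi)
    so phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

The wrapped phase obtained this way must still be unwrapped (e.g. with the Gray code sequences mentioned above) before coordinates can be calculated from it.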
The number of orientation parameters to be determined can be limited considerably if only the phase measurements are used for the calculation of coordinates. In such systems, the position of an individual receiver element in the photographic system determines only the location of measurement; it is not evaluated as measuring information. By illuminating the scene with line grids or Gray code sequences from several, but at least three, projection directions and viewing it with one or more cameras fixed in position relative to the object, coordinates can be calculated for a known geometry of the lighting system. In all these systems the system parameters (orientation parameters) must be determined separately, typically by so-called pre-calibration of the system: calibrating bodies of known geometry are measured, and from them the geometry parameters of the measuring set-up are modelled (DE 195 36 297 A1). This procedure is unusable whenever the geometry parameters cannot be kept constant in further measurements, for example owing to temperature effects or mechanical stress on the system, or whenever the complexity of the measuring task requires a variable sensor arrangement, so that measurement with a pre-fixed arrangement is ruled out.
Photogrammetric measuring methods avoid the need for a separate calibrating procedure. Here the image coordinates serve as measuring information, that is, the positions of the measuring points in the matrix of the photographic system. The image coordinates of an object point must be known from at least two different camera positions. An advantage of these measuring methods is that one surplus measurement is obtained for every measuring point, i.e. with two camera positions there is one measurement more than is needed to calculate the three coordinates of a point. In this way, given a sufficiently large number of measuring points, it is possible to calculate simultaneously the coordinates, the internal and external orientation parameters of the cameras, and correction parameters for the recording. Difficulties arise, however, in locating the homologous points required for this, above all when the number of measuring points is very large. For this purpose, textures or surface structures from different photographs must be related to each other in elaborate image-processing procedures (DE 195 36 296 A1). For the complete two-dimensional detection of an object surface this is not feasible at justifiable expense. Moreover, markings are required as link points for joining the partial views together.
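The surplus measurement mentioned above can be made concrete: two camera positions supply four image coordinates for the three unknown object coordinates, so the point can be determined by a least-squares adjustment. The following sketch uses the standard linear (DLT) triangulation with homogeneous 3×4 projection matrices; the function name and interface are illustrative assumptions, not part of the cited methods:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear least-squares triangulation (DLT).

    P1, P2 : 3x4 projection matrices of the two camera positions.
    x1, x2 : image coordinates (x, y) of the same object point.
    Two views yield four equations for three object coordinates,
    i.e. one redundant measurement per point; the solution is the
    null-space vector of the 4x4 design matrix A.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution (up to scale)
    return X[:3] / X[3]        # dehomogenize to 3-D coordinates
```

With noisy image coordinates the same formulation yields the least-squares estimate, and the residual of the fourth equation is exactly the surplus measurement exploited for the simultaneous calculation of orientation parameters.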
In DE 196 37 682 A1 a system is proposed which overcomes these problems. Here a projection system illuminates the scene with a series of strip images consisting of two sequences rotated through 90° relative to each other. Such strip images, projected onto the object from two different positions and viewed simultaneously with a fixed-position camera, permit evaluation according to the functional model of photogrammetry. Drawbacks of this system concept arise above all in the complete measurement of complex objects. With the complexity of the object to be measured, the number of necessary views also increases. Increasing the number of cameras is not sensible, however, because measuring information exists only at those object points which are both illuminated from two different directions and viewed by a camera. Furthermore, adjustment of the measuring system, i.e. orientation of the required cameras, becomes all the more difficult the more views have to be set up. For complex measuring tasks, such prospective orientation of the sensor system is not always satisfactory. A further drawback of the known methods is that the result of measurement is available for assessment only at the end of the complete measuring process. Intermediate evaluation and, based on it, adapted positioning of the projector and camera(s) are not possible here.
From DE 100 25 741 A1 a method is known for determining the spatial coordinates of objects and/or their variation in time, in which the object is illuminated in each case from at least two directions with a series of light patterns that are recorded with a two-dimensionally resolving sensor arrangement, different views being picked up with different positions of the sensor arrangement. For a new position of the sensor arrangement, at least one projection direction is selected so as to coincide with a projection direction of the preceding position. For these two projection directions the phase measurements are identical, and from them a linking rule can be determined between the recording points of the sensor arrangement in the new and in the preceding position. This system is self-calibrating, i.e. no geometrical or optical system variables have to be known or pre-calibrated before the measurement. With this known method, calibration takes place during the measurement, i.e. the calibrating camera is simultaneously the measuring camera. With complex objects this method is not satisfactory, for example because shadowing cannot be avoided.
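The linking rule described above can be sketched as follows: recording points of the new and the preceding sensor position that see identical pairs of phase values (one from each of two crossed projection sequences) are taken to correspond. The brute-force nearest-neighbour search below is purely illustrative and assumes unwrapped, noise-free phase maps; real systems would exploit the monotonic structure of the phase maps instead:

```python
import numpy as np

def link_pixels(phx_a, phy_a, phx_b, phy_b):
    """Link each pixel of view B to the pixel of view A whose pair of
    phase values (phx, phy) is closest.

    phx_*, phy_* : 2-D arrays of unwrapped phase values recorded under
    two crossed projection sequences from the same projection direction.
    Returns a dict mapping B pixel indices (row, col) to A pixel indices.
    """
    pts_a = np.stack([phx_a.ravel(), phy_a.ravel()], axis=1)
    pts_b = np.stack([phx_b.ravel(), phy_b.ravel()], axis=1)
    links = {}
    for idx_b, p in enumerate(pts_b):
        idx_a = int(np.argmin(np.sum((pts_a - p) ** 2, axis=1)))
        links[np.unravel_index(idx_b, phx_b.shape)] = \
            np.unravel_index(idx_a, phx_a.shape)
    return links
```

Because identical phase pairs identify the same object point, such a linking rule transfers the calibration of the preceding sensor position to the new one without any pre-calibrated system variables.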