Apparatuses and methods of this type are used in particular in mechanical engineering, automotive engineering, the ceramics industry, the shoe industry, the jewelry industry, dental technology and human medicine (orthopedics), among other fields, and serve for example for measurement and documentation in quality control, reverse engineering, rapid prototyping, rapid milling or digital mock-up.
The increasing demands for largely complete quality control within the running production process, and for digitization of the spatial form of prototypes, mean that the recording of surface topographies is becoming an ever more frequent measurement task. The object here is to determine the coordinates of individual points on the surface of the object to be measured within a short period of time.
Measurement systems known from the prior art that use image sequences for determining the 3D coordinates of measurement objects, and that can be configured for example as portable, handheld and/or fixedly mounted systems, generally have a pattern projector for illuminating the measurement object with a pattern; they are therefore sometimes also referred to as pattern-projecting 3D scanners or structured-light 3D scanners. The pattern projected onto the surface of the measurement object is recorded by a camera system as a further constituent part of the measurement system.
As part of a measurement, the projector thus illuminates the measurement object time-sequentially with different patterns (for example parallel bright and dark stripes of different widths; the stripe pattern can in particular also be rotated, for example through 90°). The camera(s) register(s) the projected stripe pattern at a known viewing angle with respect to the projection. One image is recorded with each camera for each projection pattern. A time sequence of different brightness values thus results for each pixel of all cameras.
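The time-sequential stripe illumination described above can be sketched as follows; the image size, number of levels and rotation step are purely illustrative assumptions, not parameters of any particular apparatus:

```python
import numpy as np

def stripe_patterns(size, n_levels=3):
    """Generate a time-sequential series of binary stripe patterns whose
    stripe width halves from level to level (square 8-bit images).
    Illustrative sketch only; real scanners project calibrated pattern sets."""
    x = np.arange(size)
    patterns = []
    for level in range(1, n_levels + 1):
        stripe_w = size // (2 ** level)               # width of one stripe
        stripes = ((x // stripe_w) % 2).astype(np.uint8) * 255
        patterns.append(np.tile(stripes, (size, 1)))  # vertical stripes
    return patterns

pats = stripe_patterns(64)
# the stripe pattern can additionally be rotated through 90 degrees:
rotated = [np.rot90(p) for p in pats]
```

Each camera then records one image per projected pattern, so every camera pixel accumulates one brightness value per pattern in the sequence.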
In addition to stripes, other suitable patterns can also be projected, such as for example random patterns, pseudocodes, etc. Suitable patterns are sufficiently known to a person skilled in the art from the prior art. Pseudocodes enable, for example, easier absolute association of object points, which becomes increasingly difficult in the projection of very fine stripes. For this purpose, it is thus possible either to project first one or more pseudocodes in rapid succession and then a fine stripe pattern, or else, in successive recordings, different stripe patterns which become increasingly fine in the sequence, until the desired accuracy in the resolution of measurement points on the measurement object surface is achieved.
The 3D coordinates of the measurement object surface can then be calculated from the recorded image sequence using image processing according to the methods known to the person skilled in this art from photogrammetry and/or stripe projection. By way of example, such measurement methods and measurement systems are described in WO 2008/046663, DE 101 27 304 A1, DE 196 33 686 A1 or DE 10 2008 036 710 A1.
The camera system typically comprises one or more digital cameras, which are situated in a known spatial position with respect to one another during a measurement. In order to ensure a stable position of the cameras relative to one another, they are usually fixedly integrated together in one common housing, with known spatial positioning and alignment, in particular wherein the cameras are aligned such that the fields of view of the individual cameras largely intersect. Two or three cameras are often used here. The projector can in this case be fixedly connected to the camera system (in the case of separate cameras, possibly also to only some of the available cameras of the camera system) or be positioned completely separately from the camera system.
In the general case, i.e. where the relative positioning and alignment of the projector with respect to the camera system are not fixed relative to one another and therefore not already known in advance, the desired three-dimensional coordinates of the surface area are calculated in two steps. In a first step, the coordinates of the projector are determined as follows. At a given object point, the image coordinates in the camera image are known. The projector corresponds to a reversed camera. The number of the stripe can be calculated from the succession of brightness values measured from the image sequence for each camera pixel. In the simplest case, this is effected via a binary code (for example a Gray code), which characterizes the number of the stripe as a discrete coordinate in the projector. A higher degree of accuracy can be achieved with what is known as the phase shift method, since it can determine a non-discrete coordinate. It can be used either to supplement a Gray code or as an absolute-measuring heterodyne method.
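The two decoding steps just described, Gray-code decoding of the discrete stripe number and its refinement by a phase shift, can be sketched as follows. This is a simplified illustration assuming ideal thresholded bits and a four-step phase shift (sinusoidal patterns shifted by 90°); it is not the method of any specific apparatus:

```python
import numpy as np

def gray_to_binary(g):
    """Convert a Gray-code integer to its binary (stripe-number) value."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_pixel(bit_sequence, phase_images):
    """Decode the (non-discrete) projector coordinate of one camera pixel.

    bit_sequence: 0/1 values thresholded from the Gray-code image sequence
                  (most significant bit first).
    phase_images: brightness of the pixel under 4 phase-shifted sinusoidal
                  patterns (shifts 0, 90, 180, 270 degrees)."""
    gray = 0
    for bit in bit_sequence:
        gray = (gray << 1) | bit
    stripe = gray_to_binary(gray)          # discrete stripe number
    i0, i1, i2, i3 = phase_images
    # 4-step phase shift: recover the phase, then map it to a stripe fraction
    phase = np.arctan2(i3 - i1, i0 - i2)
    return stripe + (phase / (2 * np.pi)) % 1.0

val = decode_pixel([1, 0], (2.0, 1.0, 0.0, 1.0))  # stripe 3, phase 0 -> 3.0
```

The discrete Gray-code result anchors the absolute stripe number, while the phase term refines the coordinate within a stripe.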
After the position of the projector has been determined in this way, or in the case where its position relative to the camera system is already known in advance, 3D coordinates of measurement points on the measurement object surface can now be ascertained, for example by the intersection method, as follows. The stripe number in the projector corresponds to the image coordinate in the camera. The stripe number specifies a light plane in space, the image coordinate a light beam. With the camera and projector positions known, the point of intersection of the plane and the straight line can be calculated. This is the desired three-dimensional coordinate of the object point in the coordinate system of the sensor. The geometric positions of all image rays must be known exactly. The rays are calculated exactly by means of the intersection method known from photogrammetry.
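The plane–line intersection underlying this step can be sketched in a few lines; the coordinate values in the usage example are hypothetical, and a real sensor would use the calibrated camera and projector poses:

```python
import numpy as np

def intersect_ray_plane(cam_center, ray_dir, plane_point, plane_normal):
    """Intersect a camera viewing ray with a projector light plane.

    The stripe number defines a light plane (point + normal), the camera
    image coordinate defines a viewing ray (origin + direction); their
    intersection is the 3D object point in the sensor coordinate system."""
    cam_center = np.asarray(cam_center, float)
    ray_dir = np.asarray(ray_dir, float)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = np.dot(plane_normal, np.asarray(plane_point, float) - cam_center) / denom
    return cam_center + t * ray_dir

# hypothetical example: camera at the origin, light plane x = 1
p = intersect_ray_plane((0, 0, 0), (1, 0, 2), (1, 0, 0), (1, 0, 0))
# p == (1, 0, 2)
```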
In order to achieve better accuracies in this measurement method for the calculation of the 3D coordinates, the non-ideal properties of real lens systems, which result in distortions of the image, can be compensated by a distortion correction, and/or a precise calibration of the imaging properties can take place. All imaging properties of the projector and cameras can be measured in calibration processes known to a person skilled in the art (for example from a series of calibration recordings), and a mathematical model describing these imaging properties can be generated therefrom (for example, the parameters defining the imaging properties are determined from the series of calibration recordings using photogrammetric methods, in particular a bundle adjustment).
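One common mathematical model for such lens distortions is the Brown–Conrady model with radial and tangential terms; the sketch below assumes normalized image coordinates, and the coefficient values in the example are purely illustrative (in practice they would come from the calibration, e.g. via bundle adjustment):

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Brown-Conrady lens model: map ideal normalized image coordinates
    (x, y) to the distorted coordinates a real lens produces.
    k1, k2: radial coefficients; p1, p2: tangential coefficients."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# illustrative coefficient: a point at x = 0.1 is shifted radially
xd, yd = distort(0.1, 0.0, k1=0.1)  # -> (0.1001, 0.0)
```

Correcting an observed (distorted) coordinate then amounts to inverting this mapping, typically by iteration, since the model has no closed-form inverse.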
In summary, in the pattern projection method, i.e. in structured-light 3D scanners, illumination of the object with a sequence of light patterns is thus necessary in order to enable an unambiguous depth determination of the measurement points in the measurement region with the aid of triangulation (intersection). A plurality of recordings (i.e. a series of images) under illumination of the measurement object with correspondingly different pattern projections (i.e. with a corresponding series of patterns) is thus usually necessary in order to ensure a sufficiently high accuracy of the measurement result. In the handheld systems known from the prior art, such as the measurement apparatus described in WO 2008/046663, the illumination sequence must take place so quickly that a movement by the operator during the recording of the series of images does not cause measurement errors. The pixels recorded by the cameras for the individual projections must be able to be assigned to one another with sufficient accuracy. The image sequence must therefore take place faster than the pattern or image shift caused by the operator. Since the emittable optical energy of the projector is limited by the available optical sources and by radiation protection regulations, this results in a limitation of the detectable energy in the camera system and thus in a limitation of the measurement on weakly reflective measurement object surfaces. The projectors are furthermore limited in terms of projection speed (image rate); typical maximum image rates of such projectors are around 60 Hz, for example.
For a measurement operation comprising the projection of a series of patterns and the recording of an image sequence of the respective patterns with the camera system, a measurement duration of approximately 200 ms is typically necessary with conventional measurement apparatuses (as an example: for recording sequences of 8 to 10 images with an exposure duration of 20 ms to 40 ms per image, total recording times or measurement durations of between 160 ms and 400 ms per measurement position can result).
In the case of insufficient steadiness, i.e. insufficiently high stability of the positions and alignments of the camera arrangement, the projector (or, if appropriate, a measurement head containing the camera arrangement and projector in an integrated fashion) and the measurement object relative to one another during a measurement operation (in a measurement position), various undesirable effects can occur which make the evaluation more difficult, more complicated or even impossible, or which at least adversely affect the attainable accuracy.
Such insufficient steadiness of the camera arrangement, of the projector (or, if appropriate, of a measurement head containing the camera arrangement and projector in an integrated fashion) or of the measurement object can have various causes.
First, vibrations in the measurement environment (for example if the measurements are carried out at a production station integrated in a production line) can be transferred to the holder of the measurement object or to a robot arm holding the measurement head and thus result in disturbing oscillations. Complicated oscillation-damping measures have thus far been necessary to counter this, or it has been necessary to move to dedicated measurement rooms, which, however, makes the production process significantly more complicated (since removal of the measurement object from the production line and its transport into the measurement room specifically designed for this purpose are necessary).
In handheld systems, the main cause of insufficient steadiness is in particular the natural tremor in the hand of the human user.
Negative effects to be mentioned here, which can be caused by a lack of position and orientation stability of the camera arrangement, of the projector and of the measurement object relative to one another, are, firstly, motion blur and/or camera shake in individual recorded images of an image sequence.
Secondly, however, the individual images of an image sequence may fail to conform to one another with respect to their respective recording positions and directions relative to the measurement object (that is, the recording positions and directions may vary across the individual images within an image sequence). The respective association of pixels in the individual images with identical measurement points on the measurement object surface is then either made entirely impossible or made possible only with enormously high computational complexity and the inclusion of information from a multiplicity of images of the same region of the measurement object surface. In other words, it may be necessary to subsequently bring the individual images into a spatial relationship in a computational manner, which is very labor-intensive; partly as a preventative measure against this effect, an excess of images per image sequence has therefore hitherto been recorded, serving mainly only to back-calculate the spatial relationship of the recording positions and directions of the individual images among one another.
In order to expand the measurement region on the measurement object (for example for measuring an object in its entirety), a plurality of measurements in succession (from various measurement positions and at various viewing angles of the cameras relative to the measurement object) is frequently necessary, wherein the results of the various measurements are subsequently linked to one another. This can take place, for example, by selecting the capture regions of the respective measurement operations so as to overlap in each case and by using the respective overlap for correspondingly joining together the 3D coordinates obtained in the several measurement operations (i.e. the point clouds): identical or similar distributions can be identified in the point clouds determined in the individual measurement operations, and the point clouds can be joined together accordingly.
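Once corresponding points in the overlap region have been identified (the computation-intensive part), the actual joining of two point clouds reduces to estimating a rigid transformation. A least-squares sketch using the SVD-based Kabsch method, one common choice not prescribed by the text, with hypothetical example data:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t (Kabsch/SVD).
    P, Q: (n, 3) arrays of corresponding points from the overlap of two
    measurement operations; correspondence finding is assumed already done."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# hypothetical overlap: Q is P rotated 90 degrees about z and shifted
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R0 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t0 = np.array([1., 2., 3.])
Q = P @ R0.T + t0
R, t = rigid_align(P, Q)   # recovers R0, t0
```

Applying the recovered transform to the first point cloud places it in the coordinate system of the second, which is the joining step described above.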
This joining operation, however, is generally extremely computation-intensive and requires a significant, disturbingly high outlay in terms of time and energy even when the greatest processor powers are available. When a robot arm is used to hold and guide the measurement head, for example, the computational outlay necessary for the joining operation can thus be reduced by capturing the recording positions and directions of the individual measurements on the basis of the respective robot arm position and using them as prior information for the joining (for example as boundary conditions).
The disadvantages in this case are the relatively low accuracy with which the measurement position is determinable on the basis of the robot arm position, and—nevertheless—the requirement that such a robot arm be present. Thus, the computational power necessary for joining together measurement results of a plurality of measurement operations cannot be reduced in this manner for handheld measurement systems.
Further disadvantages of prior art systems which use substantially coherent optical radiation for the pattern illumination are local measurement inaccuracies or gaps in the measurement points, owing to undesired speckle fields occurring in the respective patterns of the pattern sequence.