It is known to capture coordinates of object surfaces by triangulation. In particular, solutions are known in which light patterns are projected onto the object surface and an image of the object surface and thus of the light pattern, which has been changed by the object surface, is captured by a capturing device (for example by a camera). In other words, the light pattern is captured which is reflected back by the object surface and in the case of a non-planar object surface is at least locally changed (for example distorted and/or deformed) with respect to its original state. Light in the context of the present disclosure is understood to mean not only electromagnetic radiation in the visible range, but also electromagnetic radiation in other wavelength ranges. By way of example, the light pattern can be a stripe pattern.
As part of triangulation (or, in other words, using what is known as the intersection method), complete three-dimensional (3D) coordinates of the object surface can be determined from the positions of measurement points in a depth dimension (transversely to the profile) of the object surface, given knowledge of the measurement setup (for example the position of a pattern projection unit and/or of the capturing device). A coordinate or coordinate value consequently typically includes an X-, a Y-, and a Z-component in a Cartesian coordinate system having the coordinate axes X, Y, and Z. The measurement principle discussed is also explained, for example, in DE 10 2013 114 687 A1.
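The triangulation step described above can be illustrated with a minimal sketch. It assumes a simplified, rectified projector-camera setup in which the projector is offset from the camera by a known baseline along the X axis, so that the depth follows from the pixel disparity by similar triangles; all names, parameters, and the calibration values are hypothetical and serve only to illustrate the principle:

```python
import numpy as np

def triangulate(u, v, u_proj, f_px, baseline_mm, cx, cy):
    """Recover a 3D point (X, Y, Z) from a camera pixel (u, v) and the
    corresponding projector column u_proj, for a hypothetical rectified
    projector-camera pair with focal length f_px (pixels), baseline
    baseline_mm (millimeters), and principal point (cx, cy)."""
    d = u - u_proj                 # disparity between camera and projector column
    z = f_px * baseline_mm / d     # depth from similar triangles
    x = (u - cx) * z / f_px        # back-project pixel to X at that depth
    y = (v - cy) * z / f_px        # back-project pixel to Y at that depth
    return np.array([x, y, z])

# Example with assumed calibration: f = 1000 px, baseline = 100 mm,
# principal point (640, 480); a disparity of 50 px yields Z = 2000 mm.
point = triangulate(700, 500, 650, 1000.0, 100.0, 640, 480)
```

The sketch makes explicit why the computed coordinate is only valid for singly reflected light: the similar-triangles relation presumes a rectilinear path from projector to surface to camera.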
The totality of the coordinate measurement values (or measurement data) obtained in this way can be present in the form of what are known as 3D point clouds and describe, for example, an external shape (i.e., a surface) of at least one selected region of the object. In particular, the measurement data can be used to determine the dimensions and geometric sizes (such as the diameter or the width), to compare results with specifications and to assess them, to calculate properties of the object (for example quality parameters), and/or to produce a three-dimensional graphic representation of the object.
As described, the coordinate measurement values produced are based on light of the projected patterns, which has been reflected back by the object surface. In this context, it is furthermore known that the back-reflected radiation contains both direct intensity components and indirect intensity components, which are also contained as corresponding intensity components in an image captured by the capturing device. The direct intensity components are due to single reflections of the transmitted radiation (or of the light contained in the projected light pattern) at the object surface. This in particular relates to the radiation in which the size of the angle of incidence with respect to the surface normal is equal to the size of the angle of reflection with respect to the surface normal. The indirect intensity components (also referred to as global intensity components), on the other hand, are due in particular to multiple reflections of the transmitted radiation (or of the light contained in the projected light pattern). In this case, radiation starting from the object surface is reflected at at least one further point. This at least one further point can be located, for example, in the environment or in an adjacent region of the object surface. The direction of the radiation that is finally incident on the capturing device then typically differs significantly from the direction it would have after a single reflection. Generally speaking, the indirect intensity components comprise all radiation components that are not due to singly reflected radiation.
Since the triangulation principle assumes rectilinear beam paths of the reflected radiation (that is to say, generally, single reflections), indirect intensity components degrade the measurement accuracy. In particular, the indirect intensity components can, in the context of the triangulation, result in the ascertainment of incorrect depth information.
In particular, the application of the triangulation is typically based on the assumption that what is known as the correspondence problem is also solved during a measurement. That means that it is assumed that the assignment or, in other words, the correspondence of a projector pixel (or of a pixel in the originally transmitted and unchanged light pattern) and of a camera pixel (or of a pixel in the image captured by the camera) is always uniquely determinable. However, in the case of multiple reflections, it is generally no longer possible to ascertain the point at which the pattern was first reflected by the surface. From the viewpoint of the camera, incorrect conclusions may therefore be drawn as to which projector pixel a specific reflected radiation component should be assigned to. As a consequence, what are known as wrong correspondences may occur, which falsify the measurement result.
For this reason, approaches are known which attempt to limit the influence of any indirect intensity components on the measurement results. DE 10 2013 114 687 A1 teaches in this respect to define a projected pattern in such a way that it does not include mutually overlapping partial surfaces from the viewpoint of the capturing device. U.S. Pat. No. 8,811,767 B2 describes the projection of various patterns onto an object surface and the respective determination of depth values, wherein the depth values obtained from a plurality of captured images and projected patterns are compared to one another in pixel-wise fashion. If the depth values for corresponding pixels in the captured images do not agree, a disturbing influence due to indirect intensity components can be deduced and the determination of the depth value can be repeated for that pixel.
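The pixel-wise comparison just described can be sketched in a few lines. The sketch only illustrates the general idea of checking depth consistency across patterns; the function name, the tolerance, and the flagging criterion are assumptions for illustration and do not reproduce the specific algorithm of U.S. Pat. No. 8,811,767 B2:

```python
import numpy as np

def flag_inconsistent(depth_maps, tol_mm=1.0):
    """Pixel-wise consistency check across depth maps obtained with
    different projected patterns. Returns a boolean mask marking pixels
    whose depth spread across the maps exceeds tol_mm; such pixels are
    candidates for a disturbing influence by indirect intensity
    components and for remeasurement."""
    stack = np.stack(depth_maps)                    # shape: (n_patterns, H, W)
    spread = stack.max(axis=0) - stack.min(axis=0)  # per-pixel depth disagreement
    return spread > tol_mm
```

A pixel measured consistently by all patterns passes the check; a pixel whose depth estimates disagree beyond the tolerance is flagged, mirroring the repeat-measurement trigger described above.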
However, it has been shown that sufficient accuracy cannot always be attained with the approaches used to date and that these approaches generally entail considerable measurement outlay.