The present invention relates to a system for observing objects in three-dimensional space using structured light. More particularly, the invention relates to a multi-camera, three-dimensional sensing system which provides non-contact gauge measurements of an object, using known object templates and known epi-polar geometry both to identify observable projections of structured light on the surface of the object and to reconstruct corrupted portions of images of the projected structured light.
As shown in FIG. 1, measurement systems such as laser range finders illuminate an object undergoing measurement using structured light. Reflections of the structured light projected onto the surface of the object are captured by two or more calibrated cameras, generating images of the illuminated portions of the object's surface. In some applications, the structured light takes the form of a set of laser planes. Where the laser planes intersect the surface of the object, a striping effect is achieved. By detecting these laser light stripes in images of the object's surface, point correspondences can be established and triangulation techniques employed to reconstruct a representation of the surface of the object.
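The triangulation step referred to above can be illustrated with a minimal sketch. The following is a generic example, not taken from the specification: it assumes two calibrated cameras described by hypothetical 3x4 projection matrices P1 and P2, and recovers a 3-D point from a pixel correspondence using the standard linear (DLT) least-squares method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate a 3-D point from a pixel correspondence (x1, x2)
    observed by two calibrated cameras with 3x4 projection matrices
    P1 and P2, using the standard linear (DLT) method.

    Each image observation x = (u, v) of a homogeneous point X satisfies
    u * (p3 . X) = p1 . X and v * (p3 . X) = p2 . X, where p1, p2, p3
    are the rows of the camera's projection matrix."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector associated with the
    # smallest singular value of A (the approximate null space).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical setup: camera 1 at the origin, camera 2 translated
# one unit along x; a point at (0.5, 0.2, 3) projected into both.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = (0.5 / 3, 0.2 / 3)       # projection into camera 1
x2 = (-0.5 / 3, 0.2 / 3)      # projection into camera 2
print(triangulate(P1, P2, x1, x2))  # recovers approximately (0.5, 0.2, 3)
```

With noise-free synthetic data the linear method recovers the point exactly; with real, noisy stripe detections it returns the least-squares best fit, which is why the plane-of-light consistency check described below is still needed to resolve stripe ambiguity.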
First, a single pixel on each stripe in an image from a first camera is selected, as seen in FIG. 2. Because the position of the first camera's lens center in space is known and the selected pixel is known, the point corresponding to the selected pixel must lie on a known line extending from the lens center out into space. This line projects as a line in an image from a second camera, and is called the epi-polar line of the selected point in the first image. Since the position of the lens center of the second camera is also known, this epi-polar line can be calculated and drawn in the second image, where it will intersect at least one, and most likely several, of the stripes of the second image. One of the pixels where the epi-polar line intersects a stripe represents the same point as the selected pixel in the first image. The actual coordinate location in space of the point corresponding to any of these intersection points can be determined by simple triangulation. Since the position in space of each plane of light which created the stripes is also known, the single intersection point which corresponds to the selected point in the first image is ascertained by triangulating the three-dimensional coordinate of each intersection point and determining whether it lies on one of the known planes of light. The intersection point whose triangulated coordinate lies closest to a known plane of light is taken as the correct correspondence.
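The disambiguation procedure described above can be sketched as follows. This is an illustrative example, not the specification's implementation: the function names, the use of a fundamental matrix F to express the epi-polar constraint, and the representation of laser planes as (unit normal, offset) pairs are all assumptions. Each candidate intersection pixel is triangulated, and the candidate whose 3-D point lies closest to any known laser plane is selected.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) two-view triangulation; returns a 3-D point.
    A = np.array([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def pick_stripe_match(P1, P2, x1, candidates, planes):
    """Resolve the stripe ambiguity for a selected pixel x1 in the
    first image.

    candidates : pixels in the second image where stripes cross the
                 epi-polar line of x1 (already collected upstream)
    planes     : known laser planes as (unit normal n, offset d) pairs,
                 where points X on the plane satisfy n . X + d = 0

    Returns the candidate whose triangulated 3-D point lies closest to
    any known laser plane, together with that distance."""
    best, best_err = None, np.inf
    for x2 in candidates:
        X = triangulate(P1, P2, x1, x2)
        # Distance of the triangulated point to the nearest laser plane.
        err = min(abs(n @ X + d) for n, d in planes)
        if err < best_err:
            best, best_err = x2, err
    return best, best_err

# Hypothetical setup: camera 2 translated one unit along x, a true
# surface point at (0.5, 0.2, 3) on a laser plane x = 0.5, and a
# second spurious stripe crossing of the epi-polar line.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = (0.5 / 3, 0.2 / 3)
candidates = [(-0.5 / 3, 0.2 / 3),   # true match (depth 3)
              (-1.0 / 12, 0.2 / 3)]  # spurious crossing (depth 4)
planes = [(np.array([1.0, 0.0, 0.0]), -0.5)]  # plane x = 0.5
best, err = pick_stripe_match(P1, P2, x1, candidates, planes)
print(best, err)  # selects the true match with near-zero plane distance
```

The spurious candidate triangulates to a point roughly 0.17 units from the laser plane, so the plane-distance test cleanly rejects it; in practice a threshold on this distance can also be used to discard pixels with no plausible match at all.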
The surface of many objects includes regions which are shiny or which otherwise have poor reflectivity characteristics. Structured light projected onto such surfaces produces multiple reflections or clutter, causing poor laser stripe identification in the generated images. Furthermore, in some situations, entire image regions representing portions of an object's surface may become corrupted due to noise or interference.
Accordingly, there is a need for a method of reconstructing projected laser stripes in those portions of images which are heavily corrupted due to reflection, noise, or interference.