In manufacturing and assembly processes, it is often desirable to analyze an object surface to determine the nature of features and/or irregularities. The displacement (or "profile") of the object surface can be determined using a machine vision system (also termed herein "vision system") in the form of a laser displacement sensor (also termed a laser beam "profiler"). A laser displacement sensor captures and determines the (three-dimensional) profile of a scanned object surface using a planar curtain or "fan" of a laser beam at a particular plane transverse to the beam propagation path. In a conventional arrangement, a vision system camera assembly is oriented to view the plane of the beam from outside the plane. This arrangement captures the profile of the projected line (e.g. extending along the physical x-axis) on the object surface. Because of the baseline (i.e. the relative spacing along the y-axis) between the beam (fan) plane and the camera, the imaged line appears to vary in the image y-axis direction as a function of the physical z-axis height of the imaged point (along the image x-axis). This deviation represents the profile of the surface. Laser displacement sensors are useful in a wide range of inspection and manufacturing operations where the user desires to measure and characterize surface details of a scanned object via triangulation. One form of laser displacement sensor uses a vision system camera having a lens assembly and image sensor (or "imager") that can be based upon a CCD or CMOS design. The imager defines a predetermined field of grayscale or color-sensing pixels on an image plane that receives focused light from an imaged scene through a lens.
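By way of a simplified, hypothetical illustration (not part of any particular sensor design), the triangulation relationship described above can be sketched as follows. The function name, parameter names, and the thin-lens/small-angle model are assumptions for illustration only; a real profiler applies a full calibrated camera model rather than this idealized formula.

```python
import math

def height_from_line_offset(dy_pixels: float, pixel_pitch_mm: float,
                            magnification: float,
                            triangulation_angle_deg: float) -> float:
    """Convert the observed y-offset of the laser line in the image to a
    physical z-height via simple triangulation.

    dy_pixels:             offset of the line from its reference row, in pixels
    pixel_pitch_mm:        physical size of one sensor pixel (mm)
    magnification:         optical magnification of the lens (object -> image)
    triangulation_angle_deg: angle between the camera axis and the laser plane

    All parameter values are illustrative placeholders.
    """
    # Map the pixel offset back into object space through the optics.
    dy_mm = dy_pixels * pixel_pitch_mm / magnification
    # Project the in-plane offset onto the z-axis via the baseline angle.
    return dy_mm / math.sin(math.radians(triangulation_angle_deg))
```

For example, a 100-pixel line offset with a 0.005 mm pixel pitch, 0.5x magnification, and a 30-degree triangulation angle corresponds to a 2 mm height under this simplified model.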
In a typical arrangement, the displacement sensor(s) and object are in relative motion (usually in the physical y-coordinate direction) so that the object surface is scanned by the sensor(s), and a sequence of images is acquired of the laser line at desired spatial intervals—typically in association with an encoder or other motion measurement device (or, alternatively, at time-based intervals). Each of these single profile lines is typically derived from a single acquired image. These lines collectively describe the surface of the imaged object and surrounding imaged scene and define a "range image" or "depth image".
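The assembly of individual profile lines into a range image can be sketched as below. This is a minimal, hypothetical illustration: it assumes the profiles are already extracted as one-dimensional height arrays and are ordered by encoder (y) position at uniform spacing, which is the simplest case.

```python
import numpy as np

def build_range_image(profiles):
    """Stack per-image laser-line profiles into a range image.

    profiles: list of equal-length 1-D arrays, one z-profile per acquired
              image, ordered by encoder position along the scan (y) direction.

    The resulting 2-D array is a "range image": each pixel value is a
    height (z) rather than an intensity, with rows indexed by scan
    position (y) and columns by position along the line (x).
    """
    return np.vstack(profiles)
```

In practice each row would be tagged with its encoder position so that non-uniform motion can be resampled onto a regular grid.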
Other camera assemblies can also be employed to capture a 3D image (range image) of an object in a scene. For example, structured light systems, stereo vision systems, DLP metrology, LIDAR-based systems, time-of-flight cameras, and other arrangements can be employed. These systems all generate an image that assigns a height value (e.g. a z-coordinate) to each pixel.
A 3D range image generated by various types of camera assemblies (or combinations thereof) can be used to determine the presence and location of points on the object surface. In certain vision system implementations, a plurality of displacement sensors (e.g. laser profilers) can be mounted together. In the example of a laser profiler, the object moves in relative motion with respect to the camera(s). This motion can serve as the basis of a common (motion) coordinate system for all displacement sensors.
Many vision systems implement a so-called volume tool. This tool is designed to translate the positional data of the object surface into a quantitative measurement of object volume based on the imaged size of the object within the field of view (FOV). The imaged size of the object is compared to a calibrated value for pixels in the FOV, allowing measurements to be expressed in physical units (e.g. millimeters, cubic centimeters, etc.). By way of example, the volume tool returns a measurement for the object in cubic centimeters.
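The core computation of such a volume tool can be sketched as a simple integration of calibrated heights over the object's footprint. This is a hypothetical sketch, assuming a range image already expressed in millimeters above a flat reference plane and a per-pixel calibrated area; it is not the implementation of any particular commercial tool.

```python
import numpy as np

def volume_from_range_image(z_mm, pixel_area_mm2):
    """Estimate object volume from a calibrated range image.

    z_mm:           2-D array of heights (mm) above the reference plane
    pixel_area_mm2: calibrated physical area covered by one pixel (mm^2)

    Each pixel contributes (height x pixel footprint area); negative
    heights (below the reference plane) are clipped to zero.
    Returns the volume in cubic millimeters.
    """
    return float(np.sum(np.clip(z_mm, 0.0, None)) * pixel_area_mm2)
```

For instance, a flat 10x10-pixel region of uniform 2 mm height with a 0.25 mm² pixel footprint yields 50 mm³ (0.05 cm³).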
A particular challenge is measuring the volume of an object that includes surfaces not readily visible or discernable to the displacement sensor. For example, where the object 110 contains an undercut surface 112 as shown in the vision system arrangement 100 of FIG. 1, the laser fan 120 illuminates a line on the object 110 that is not visible to the displacement sensor's (130) optics 132 and imager 134 in the region of the undercut. Thus, any volumetric measurement (performed by the volume tool 152, which is part of the vision system process(or) 150) will be inaccurate, as it assumes that the undercut side is actually vertical (dashed line 114), or defines another shape that does not accurately reflect the real geometry of the object 110. Note that the object 110 in this example is supported on a conveyor and/or motion stage 170 that transmits motion information 160 (e.g. from an encoder or motion controller) to the vision system process(or) 150. Note also that the conveyor 170 and sensor 130 are in relative motion in a direction perpendicular to the page of the figure.
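The magnitude of the error introduced by the vertical-side assumption can be illustrated with a simple hypothetical geometry. Suppose the undercut side slopes linearly inward from the visible top edge down to the base, so the hidden region is a triangular prism; a tool that fills in a vertical side (dashed line 114) over-reports the volume by that prism. The function and its parameters are illustrative assumptions, not measurements from any actual object.

```python
def undercut_overestimate_mm3(height_mm: float, undercut_depth_mm: float,
                              length_mm: float) -> float:
    """Volume over-reported when a linear undercut is assumed vertical.

    height_mm:         height of the undercut side wall
    undercut_depth_mm: horizontal distance the base recedes under the top edge
    length_mm:         extent of the undercut along the scan (y) direction

    The hidden region is modeled as a triangular prism:
    (1/2 x height x depth) cross-section, extruded along the scan length.
    """
    return 0.5 * height_mm * undercut_depth_mm * length_mm
```

For example, a 10 mm tall side receding 4 mm over a 20 mm length hides 400 mm³, which the vertical-side assumption wrongly counts as object volume.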