A well-known method and device for detecting the position and posture of an object set on a plane stores a two-dimensional image of the object as a teaching model and then pattern-matches that teaching model against image data produced by a camera. This method and device are often applied when conveying parts and articles (“workpieces”) grasped by a robot, where the robot is set at a predetermined position with respect to the workpieces being grasped.
Picking an individual workpiece out of a pile of disordered, same-shaped workpieces positioned within a predetermined range, e.g., the field of view of a camera, in any three-dimensionally different position and posture has been found not well suited to the robot. Attempts have been made to rely on CAD data as the teaching model; however, limitations remain in detecting the workpiece when it is in certain three-dimensional positions with respect to the camera on the robot.
U.S. Pat. No. 7,200,260 B1 describes using a CCD camera to generate teaching models of a workpiece. The operator sets a work coordinate system for a workpiece fixed in place. The camera coordinate system for the camera, which is attached to a robot arm, is then calibrated and set in the robot controller. Next, the first position in space (x, y, z) and posture, or angle, of the camera with respect to the workpiece are set, along with the subsequent positions and angles that the camera will take with respect to the workpiece. The camera is then moved to each of these positions (four different positions are described in U.S. Pat. No. 7,200,260 B1) and captures an image at each one. These four images become four different teaching models, which are shown in FIG. 4 of U.S. Pat. No. 7,200,260 B1 and in FIG. 1 herein.
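The teaching-model capture procedure described above can be sketched as follows. This is a minimal illustrative sketch, not code from the patent; the pose values, `capture_image` stub, and all other names are hypothetical stand-ins for the robot-and-camera hardware steps.

```python
# Hypothetical sketch of the teaching-model capture procedure described
# in U.S. Pat. No. 7,200,260 B1: one 2D image is captured at each preset
# camera position/posture, and each image is stored as a teaching model.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CameraPose:
    # Camera position (x, y, z) and posture angles (rx, ry, rz)
    # with respect to the work coordinate system; units are illustrative.
    x: float
    y: float
    z: float
    rx: float
    ry: float
    rz: float


def capture_image(pose: CameraPose) -> List[List[int]]:
    # Placeholder for moving the robot-mounted CCD camera to `pose`
    # and grabbing a 2D image; here it returns a dummy image.
    return [[0] * 4 for _ in range(4)]


def build_teaching_models(poses: List[CameraPose]) -> List[Dict]:
    # One teaching model is stored per camera pose, so four poses
    # (as in the patent) yield four teaching models. The pose is kept
    # with each model because Rx and Ry are later read back from it.
    return [{"pose": p, "image": capture_image(p)} for p in poses]


# Four illustrative camera poses, e.g. tilted views of the workpiece.
poses = [
    CameraPose(0, 0, 300, 0, 0, 0),
    CameraPose(0, 0, 300, 15, 0, 0),
    CameraPose(0, 0, 300, 0, 15, 0),
    CameraPose(0, 0, 300, 15, 15, 0),
]
models = build_teaching_models(poses)
print(len(models))
```

Storing the robot pose alongside each image is the essential point: the patent's later pose-recovery step depends on knowing the robot position at teach for whichever model matches.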
U.S. Pat. No. 7,200,260 B1 describes using pattern matching to locate a workpiece that is shaped like the teaching model. Pattern matching locates an object, in this instance an image of the workpiece, translated in x, y, Rz (rotation about the Z-axis) and scale, which is a percentage. Pattern matching was, at least at the time of the filing date of U.S. Pat. No. 7,200,260 B1, a two-dimensional (2D) process; there were no Rx or Ry computations. U.S. Pat. No. 7,200,260 B1 describes producing image data with a three-dimensional (3D) visual sensor permitting measurement of distance data, and differentiates this sensor from a CCD camera, which according to U.S. Pat. No. 7,200,260 B1 is for producing a two-dimensional image. Using the method described in U.S. Pat. No. 7,200,260 B1, when an object is found using pattern matching, the z coordinate is taken from data acquired by the 3D visual sensor, and Rx and Ry are derived from data associated with the robot position at teach, i.e., the robot position when the appropriate teaching model image was taken. As such, if a resolution of ±3 degrees over a range of 30 degrees in Rx and Ry is desired, then 21 different teaching models are needed, which would greatly slow down the pattern matching. In addition, alignment of the 3D map generated by the 3D visual sensor to the 2D image from the CCD camera is critical to achieving reasonable pick accuracy with the robot arm.
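The pose-recovery step described above can be sketched as follows, assuming a simplified data layout: the 2D pattern match supplies x, y, Rz and scale; the z coordinate is looked up in the 3D visual sensor's distance data; and Rx and Ry are copied from the robot pose recorded when the matched teaching model was taken. All function and field names here are illustrative, not the patent's.

```python
# Hedged sketch of assembling a 6-DOF workpiece pose from the 2D match
# result, the 3D visual sensor data, and the stored teach pose, as
# described in U.S. Pat. No. 7,200,260 B1. Data structures are assumed.


def recover_pose(match: dict, depth_map: dict, teach_poses: dict) -> dict:
    """Combine a 2D pattern-match result with 3D and teach-pose data."""
    x, y = match["x"], match["y"]
    z = depth_map[(x, y)]                    # from the 3D visual sensor
    teach = teach_poses[match["model_id"]]   # robot position at teach
    return {
        "x": x,
        "y": y,
        "z": z,
        "rx": teach["rx"],    # derived from robot position at teach
        "ry": teach["ry"],    # derived from robot position at teach
        "rz": match["rz"],    # from the 2D pattern match
    }


# Illustrative values: teaching model 2 matched at pixel (120, 80),
# rotated 10 degrees in-plane, at a measured distance of 412.5 mm.
depth_map = {(120, 80): 412.5}
teach_poses = {2: {"rx": 15.0, "ry": 0.0}}
match = {"model_id": 2, "x": 120, "y": 80, "rz": 10.0, "scale": 98.0}
pose = recover_pose(match, depth_map, teach_poses)
print(pose)
```

The lookup `depth_map[(x, y)]` is where the alignment concern noted above enters: if the 3D map is not registered to the 2D image, the z value retrieved for the matched pixel is wrong, degrading pick accuracy.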