Computer vision systems are used in many applications, such as automatic manufacturing with robots. Most robots can operate only in restricted and constrained environments. For example, parts on an assembly line must be placed in a known pose before a robot can grasp and manipulate them. As used herein, the pose of an object is its combined 3D position and 3D orientation, i.e., its translation and rotation relative to a reference coordinate frame.
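A 6-DoF pose as defined above can be represented as a rigid transform. The following sketch, with an illustrative rotation and translation not taken from the text, shows one common encoding as a 4x4 homogeneous matrix:

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Compose a 3x3 rotation and a 3-vector translation into a 4x4 pose."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def rot_z(theta):
    """Rotation by angle theta about the z-axis (illustrative choice)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Example pose: rotate 90 degrees about z, then translate by (1, 2, 3).
T = pose_matrix(rot_z(np.pi / 2), [1.0, 2.0, 3.0])

# Applying the pose to a model point in homogeneous coordinates.
p_model = np.array([1.0, 0.0, 0.0, 1.0])
p_world = T @ p_model
print(np.round(p_world[:3], 6))  # the model point (1,0,0) maps to (1, 3, 3)
```

The same pose can equivalently be stored as a rotation plus a translation vector; the homogeneous form simply makes composing and applying poses a single matrix product.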
Methods for determining the poses of objects from 3D-model-to-2D-image correspondences are well known. Unfortunately, those methods do not work well for objects with shiny or textureless surfaces. The situation is particularly severe when multiple identical objects are placed in a cluttered scene, for example a bin in which the objects are piled on top of each other.
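To make the notion of 3D-model-to-2D-image correspondences concrete, the sketch below projects model points through an assumed pinhole camera; the intrinsics, pose, and model points are illustrative values, not parameters from the text:

```python
import numpy as np

# Assumed pinhole intrinsics: focal lengths fx, fy and principal point (cx, cy).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                    # object orientation (identity for simplicity)
t = np.array([0.0, 0.0, 5.0])    # object placed 5 units in front of the camera

model_points = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])

def project(X, K, R, t):
    """Project 3D points X into pixel coordinates."""
    Xc = X @ R.T + t               # transform model points into the camera frame
    uvw = Xc @ K.T                 # apply the camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

image_points = project(model_points, K, R, t)
# Each pair (model_points[i], image_points[i]) is one 3D-to-2D correspondence.
print(image_points)  # → [[320. 240.] [420. 240.] [320. 340.]]
```

Pose estimation methods invert this mapping: given such correspondences, they recover R and t. On shiny or textureless surfaces the 2D features needed to establish the correspondences are unreliable, which is why those methods degrade there.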
With chamfer matching, the contour of an object can be used to identify the object and determine its pose. However, conventional methods fail when the imaged contour is partially occluded or located in a cluttered background. Edge orientation can be incorporated to make chamfer matching more robust in cluttered backgrounds. The best computational complexity of existing chamfer matching algorithms is linear in the number of contour points.
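The core of chamfer matching is scoring a template contour against the distance transform of an edge map: the cost is the average distance from each placed contour point to the nearest image edge. The toy sketch below uses a brute-force O(h·w·N) distance transform for clarity (the linear-time algorithms mentioned above are more involved); the edge map and template are illustrative assumptions:

```python
import numpy as np

def distance_transform(edges):
    """Brute-force Euclidean distance transform of a binary edge map."""
    ys, xs = np.nonzero(edges)
    edge_pts = np.stack([ys, xs], axis=1).astype(float)   # (N, 2) edge pixels
    h, w = edges.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1).reshape(-1, 1, 2).astype(float)
    d = np.linalg.norm(grid - edge_pts[None, :, :], axis=2)  # all pairwise dists
    return d.min(axis=1).reshape(h, w)  # nearest-edge distance at every pixel

def chamfer_score(dt, contour, offset):
    """Mean nearest-edge distance of the template contour placed at `offset`."""
    pts = contour + offset
    return dt[pts[:, 0], pts[:, 1]].mean()

# Toy edge image: the boundary of a 3x3 square placed at (4, 4).
square = np.array([[0, 0], [0, 1], [0, 2], [1, 0],
                   [1, 2], [2, 0], [2, 1], [2, 2]])
edges = np.zeros((10, 10), dtype=bool)
edges[square[:, 0] + 4, square[:, 1] + 4] = True

dt = distance_transform(edges)
# The true placement scores 0; a wrong placement scores strictly higher.
print(chamfer_score(dt, square, np.array([4, 4])),
      chamfer_score(dt, square, np.array([1, 1])))
```

Because the score depends only on distances to the nearest edge, any strong edge in a cluttered background can attract the template, which is the failure mode that orientation-augmented variants mitigate.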
Active illumination patterns can greatly assist computer vision methods by enabling accurate feature extraction in cluttered scenes. One example of such a method is depth estimation by projecting a structured illumination pattern onto the scene.
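As a hedged sketch of the structured-light idea: once the projected pattern identifies which projector column illuminated each camera pixel, depth follows from triangulation between the camera and the projector. The focal length, baseline, and decoded columns below are illustrative assumptions:

```python
import numpy as np

f = 500.0   # assumed focal length, in pixels, shared by camera and projector
b = 0.1     # assumed camera-projector baseline, in meters

# Decoding the structured pattern tells us, for some camera pixels, which
# projector column lit them (values here are made up for illustration).
camera_cols = np.array([300.0, 310.0, 350.0])
projector_cols = np.array([250.0, 270.0, 330.0])

# Triangulation: disparity between camera and projector columns gives depth.
disparity = camera_cols - projector_cols
depth = f * b / disparity
print(depth)  # depths of 1.0, 1.25 and 2.5 meters
```

The key benefit in cluttered scenes is that the correspondence problem is solved by the pattern itself rather than by natural surface texture, so shiny or textureless objects no longer defeat feature extraction.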