In recent years, with the evolution of robotics technologies, robots have increasingly taken the place of humans in performing complicated tasks, such as the assembly of industrial products. For a robot to grasp a part and assemble parts together to manufacture an industrial product, the position and orientation of each of the parts as viewed from the robot need to be accurately measured. To measure the position and orientation of a part, a method called model fitting is typically employed. In model fitting, the position and orientation is calculated so that a three-dimensional (3D) shape model of the part fits a grayscale image obtained from a camera or a range image obtained from a range sensor. More specifically, image features detected from the grayscale image, or the 3D point cloud obtained from the range image, are matched to features of the model and, thereafter, the position and orientation is calculated so that the sum of the errors between the matched features in the image plane or in 3D space is minimized.
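The matching-and-minimization step described above can be sketched, for the range-image case, as follows. This is an illustrative assumption rather than any specific implementation: the hypothetical helper `fit_pose` matches each 3D model point to its nearest measured point and then computes the rigid transform minimizing the sum of squared 3D errors between the matched pairs, using the standard SVD-based (Kabsch) least-squares solution.

```python
import numpy as np

def fit_pose(model_pts, measured_pts):
    """One model-fitting step (hypothetical sketch): match each model
    point to its nearest measured point, then find the rigid transform
    (R, t) that minimizes the sum of squared 3D errors between the
    matched pairs."""
    # Match model features to measurement data by nearest neighbour
    d = np.linalg.norm(model_pts[:, None, :] - measured_pts[None, :, :], axis=2)
    matched = measured_pts[np.argmin(d, axis=1)]

    # Least-squares rigid alignment of the matched pairs (Kabsch/SVD)
    mu_m, mu_s = model_pts.mean(axis=0), matched.mean(axis=0)
    H = (model_pts - mu_m).T @ (matched - mu_s)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_s - R @ mu_m
    return R, t
```

In practice this step is iterated (as in ICP), since the nearest-neighbour matching improves once the model is moved closer to the data.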
When the position and orientation of a part is measured by fitting a model to the grayscale image or the range image, the placement of the camera or the range sensor relative to the part is an important factor, since it greatly affects the measurement accuracy of the position and orientation. Accordingly, the camera or the range sensor needs to be placed so that the measurement accuracy is maximized. For example, if a range sensor is used, the range sensor is placed so that points on each of three planes having mutually different normal vectors are sufficiently observed. In this manner, the position and orientation of the part can be accurately measured. NPL 1 describes a method for selecting the optimum placement from among a plurality of candidate placements of the range sensor by estimating the uncertainty (the error) of the position and orientation of an object to be measured on the basis of the uncertainty included in the measurement data obtained from the range sensor.
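As a rough illustration of this kind of placement selection (the error model below is an assumption, not the formulation of NPL 1): if each range measurement is taken to constrain the part's translation only along the observed surface normal, the predicted translational covariance for a candidate placement is sigma^2 * inv(sum of n n^T) over the observed normals n, and the candidate minimizing, e.g., its trace is selected. Three mutually different normals make the information matrix well conditioned, consistent with the three-plane observation above.

```python
import numpy as np

def placement_uncertainty(normals, sigma=1.0):
    """Illustrative per-placement error estimate: each range measurement
    constrains translation only along its surface normal, so the 3x3
    information matrix is sum n n^T; the trace of its (scaled) inverse
    summarizes the predicted positional uncertainty."""
    N = np.asarray(normals, dtype=float)
    info = N.T @ N                          # 3x3 information matrix
    cov = sigma**2 * np.linalg.inv(info)    # predicted covariance
    return np.trace(cov)

def best_placement(candidates, sigma=1.0):
    """Select the candidate placement with the smallest predicted
    uncertainty, mirroring the selection criterion described above."""
    return min(range(len(candidates)),
               key=lambda i: placement_uncertainty(candidates[i], sigma))
```

A placement observing three near-parallel normals yields a nearly singular information matrix and hence a much larger predicted uncertainty than one observing three orthogonal normals.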
In NPL 1, it is assumed that no error occurs in the matching between the measurement data and the 3D shape model and, thus, that any error in the position and orientation obtained through model fitting is caused only by errors in the measurement data. In reality, however, errors do occur in the matching between the measurement data and the 3D shape model, so the accuracy of the measured position and orientation is lower than in the case in which only the measurement data contains errors. For example, when the 3D shape model of an object (a line-segment-based model) is fitted to edges detected from a grayscale image and a plurality of edges lie in close proximity to one another in the image, a line segment of the model may be matched to the wrong edge. In such a case, the accuracy of the position and orientation significantly decreases. Consequently, even when the placement of the camera or the range sensor is determined by the method described in NPL 1, the measured position and orientation is not always accurate if the matching contains errors.