1. Field of the Invention
The present invention pertains to a three-dimensional data processing device and a three-dimensional data processing method, and more particularly, to a three-dimensional data processing device and a three-dimensional data processing method that splice together two pieces of three-dimensional data obtained from different directions.
2. Description of the Related Art
Conventionally, among the various means available to recognize the three-dimensional configuration of an object, the light-section method is often used as the most practical. This light-section method is explained with reference to FIG. 1.
Camera 2 comprises semiconductor laser irradiating unit 5 and projecting optical system 8, which projects the laser light output from semiconductor laser irradiating unit 5 onto target object 1 in a slit configuration. Semiconductor laser irradiating unit 5 can move from right to left or vice versa (the direction indicated by an arrow in the drawing) so that the slit light can be swept laterally.
Slit laser light S is irradiated onto target object 1, and a slit image of target object 1 that corresponds to slit light S is caught on the camera's image pick-up plane. The spatial coordinates of a point p on the target object that corresponds to a certain point p' on the slit image can then be obtained as the coordinates of the point at which plane S, created by the slit light, intersects straight line L connecting point p' and center point O of the lens of the image pick-up device (not shown in the drawing). In this way, the spatial coordinates of the points on the surface of the target object that correspond to points on the slit image can be obtained from one image. By moving the slit light laterally and repeating image input, three-dimensional data regarding the target object can be obtained.
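The plane–line intersection described above can be sketched numerically. The sketch below assumes the slit plane is given by a point on the plane and a normal vector, with all coordinates expressed in a common camera frame; the function and parameter names are illustrative, not part of the original disclosure.

```python
import numpy as np

def triangulate(p_image, lens_center, plane_point, plane_normal):
    """Intersect straight line L (through lens center O and image
    point p') with slit-light plane S, returning surface point p."""
    p_prime = np.asarray(p_image, dtype=float)
    o = np.asarray(lens_center, dtype=float)
    q = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)

    d = p_prime - o                    # direction of line L
    denom = n @ d
    if abs(denom) < 1e-12:
        raise ValueError("line L is parallel to plane S")
    t = (n @ (q - o)) / denom          # parameter along line L
    return o + t * d                   # point p on the object surface

# Example: lens center at the origin, slit plane z = 5.
p = triangulate([0.2, 0.1, 1.0], [0, 0, 0], [0, 0, 5.0], [0, 0, 1.0])
# p = (1.0, 0.5, 5.0), which lies on the plane z = 5
```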
Where three-dimensional data for the entire target object is to be obtained, since only a certain range can be sensed in one session, two or more image sensing sessions are carried out while changing the position of the camera or the position of the target object, and the two or more pieces of three-dimensional data thus obtained are spliced together to create one piece of data pertaining to the entire target object.
Conventionally, two or more pieces of three-dimensional data obtained in the method described above are spliced together using conversion parameters calculated based on the position of the camera or the position of the rotary stage on which the target object is mounted, which are mechanically detected with high accuracy.
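When the rotary stage angle between two sessions is measured mechanically, the conversion parameters reduce to a known rotation about the stage axis. The following is a minimal sketch of applying such parameters; the choice of the y axis as the rotation axis and the function name are assumptions made for illustration.

```python
import numpy as np

def splice_with_stage_angle(points_a, points_b, theta_rad):
    """Merge two scans taken with the rotary stage turned by
    theta_rad between sessions. The stage axis is assumed (for
    this sketch) to be the y axis through the origin."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    # Rotation about the y axis by the mechanically measured angle.
    r = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Bring scan B into scan A's coordinate frame, then concatenate.
    return np.vstack([a, b @ r.T])

merged = splice_with_stage_angle([[0.0, 0.0, 1.0]],
                                 [[1.0, 0.0, 0.0]], np.pi / 2)
```

Because the angle comes from the stage encoder rather than from the data, the splicing accuracy is limited by the mechanical accuracy of the stage, which is exactly the cost issue discussed next.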
In the conventional method of splicing together two or more pieces of three-dimensional data, it is necessary to mechanically detect the position of the camera or the position of the rotary stage with high accuracy, and if one attempts to improve the accuracy of the data splicing, the device becomes very costly.
Moreover, there are cases where one would like to input three-dimensional data pertaining to the entire object without the use of equipment such as a rotary stage. In such a case, the orientation of the object or the position of the three-dimensional input camera must be changed manually. The user must then look for the same points on the target object in each adjacent piece of three-dimensional data as corresponding points, so that data splicing can be carried out based on said corresponding points. However, when two pieces of three-dimensional data taken from two different directions are spliced together, the corresponding points must be sought in a condition in which the relative positional relationship between the pieces of three-dimensional data is entirely unknown, which requires very complex and time-consuming calculation.
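Once corresponding point pairs have been identified, the conversion parameters (a rotation and a translation) can be recovered in closed form. The sketch below uses the standard SVD-based least-squares rigid alignment; this is a well-known technique offered for context, not necessarily the method contemplated by the invention, and the function name is illustrative.

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Least-squares rotation r and translation t mapping the src
    points onto the dst points, given corresponding point pairs
    (classic SVD-based solution)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # The SVD of the cross-covariance gives the optimal rotation.
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```

Note that this closed-form step presupposes correct correspondences; the difficulty described above lies in finding those correspondences in the first place when the relative positional relationship is unknown.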
It is therefore conceivable that images of the three-dimensional configurations based on the pieces of three-dimensional data input from the two different directions could be displayed, and the user could manually input the corresponding points of the images. However, in this case, because a three-dimensional configuration is displayed on a two-dimensional display device, it is difficult for the user to recognize the configuration at a glance, and therefore the user cannot easily specify the corresponding points of the images.