1. Field of the Invention
The present invention relates to an apparatus for inputting and outputting three-dimensional data that determine the shape of an object by irradiating a detection light beam, such as a slit light beam or a spot light beam, toward the object so as to scan the object, and to an image sensing control method therefor.
2. Description of the Prior Art
Conventionally, a three-dimensional measuring apparatus (an apparatus for inputting three-dimensional data), which is of a non-contact type and enables more rapid measurement than a contact type, is used for data input into a CG system or a CAD system, physical measurement, robot vision and other applications.
A slit light projection method (also referred to as a light cutting method) is known as a non-contact type of three-dimensional measurement. By this method, a distance image (three-dimensional image) can be obtained by optically scanning an object. The method is one of the active measurement methods, in which an image of an object is taken while a specific detection light beam (or reference light beam) irradiates the object. The distance image is a set of pixels that indicate the three-dimensional positions of plural parts of the object. In the slit light projection method, a light beam having a slit-like cross section is used as the detection light beam, and linear sequential scanning is performed by deflecting the detection light beam transversely to the slit. The longitudinal direction of the slit is the main scanning direction and the width direction thereof is the subscanning direction. At a certain time point in the scanning, a part of the object is irradiated, and an emission line that is curved corresponding to the ups and downs of the irradiated part appears on the image sensing surface (light-receiving surface). Therefore, a group of three-dimensional data that determine the shape of the object can be obtained by periodically sampling the intensity of each pixel of the image sensing surface during the scanning.
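The sampling described above can be illustrated with a minimal sketch; the sensor size, the shape of the emission line and the intensity values below are hypothetical and chosen only for illustration:

```python
import numpy as np

# One sampling step of a slit-light scan (assumed 64x8 image sensing surface).
# A curved emission line appears on the surface; for each main scanning
# (column) position, the subscanning (row) position of peak intensity encodes
# the ups and downs of the irradiated part of the object.
ROWS, COLS = 64, 8
frame = np.zeros((ROWS, COLS))

# Synthesize an emission line curved by the object's surface relief.
line_rows = 30 + np.round(4 * np.sin(np.linspace(0, np.pi, COLS))).astype(int)
frame[line_rows, np.arange(COLS)] = 1.0

# Sampling: for each column, the row of peak intensity gives one range sample;
# collecting these over the whole scan yields the distance image.
peak_rows = frame.argmax(axis=0)
print(peak_rows.tolist())
```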
A method is conventionally known in which, when sampling the brightness on the image sensing surface, the objective of each sampling is limited to a partial belt-like zone (block) on which the detection light is expected to be incident, rather than the whole image sensing surface, and the belt-like zone is shifted along the subscanning direction for each sampling (Japanese Patent Application laid-open No. 9-145319(A)). In this method, the time required for each sampling can be shortened, so that scanning can be performed more rapidly, and the burden on the signal processing system can be reduced by reducing the amount of data.
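The shifting belt-like zone can be sketched as follows; the surface size and the zone width are assumed values, not taken from the cited application:

```python
import numpy as np

# Reading only a belt-like zone that shifts by one line per sampling,
# instead of the whole image sensing surface (sizes assumed).
ROWS, COLS, BAND = 64, 8, 4              # BAND = width of the belt-like zone
surface = np.arange(ROWS * COLS).reshape(ROWS, COLS)  # stand-in pixel data

def read_belt_zone(surface, top, band=BAND):
    """Read only the band of lines on which the detection light is expected."""
    return surface[top:top + band, :]

# Each sampling shifts the zone one line along the subscanning direction.
samples = [read_belt_zone(surface, top) for top in range(0, ROWS - BAND + 1)]
# Per sampling, only BAND*COLS pixels are read instead of ROWS*COLS.
```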
Incidentally, the distance range (measurement range) within which three-dimensional data input is possible depends on the width of the belt-like zone (the number of pixels along the subscanning direction) described above. Narrowing the width of the belt-like zone to improve the speed, therefore, poses the problem of a reduced measurement range. By thinning out the lines in the subscanning direction, i.e., reading the image in the belt-like zone every other line, the speed can be increased while maintaining the measurement range. In such a case, however, the resolution of the three-dimensional data input in the depth direction is reduced to one half. In other words, fine concavo-convex information of the object cannot be obtained.
A CCD image sensor or a MOS type image sensor having a two-dimensional image sensing surface is used as an image sensing device (light-receiving device) of the three-dimensional measuring system described above.
The CCD image sensor is capable of resetting the accumulated electric charge and transferring the electric charge at the same timing for all the pixels on the image sensing surface. After the electric charge is accumulated, each pixel data is read out sequentially. The pixel data obtained in this way have all been acquired at the same timing. Even when the detection light is moving, therefore, the image sensing is performed at the proper timing, thereby producing a superior three-dimensional image of the object.
In the MOS type image sensor, on the other hand, the operations of resetting, electric charge accumulation and reading are performed independently for each pixel. In other words, the image sensing timing deviates for each pixel data. It sometimes occurs therefore that a superior image of the object cannot be obtained.
With the MOS type image sensor, on the other hand, random access is possible, and therefore only the required pixels of the image sensing data can be selected, so that a high-speed read operation is possible without reading the unrequired portion.
The use of a MOS type image sensor as an image sensing device, therefore, can improve the image sensing speed.
As described above, for assuring a high-speed calculation of a three-dimensional image, only an effective light-receiving area, which is a part of the image sensing surface, is read out. In addition, in order to improve the measuring resolution beyond the value corresponding to the pixel pitch of the image sensing device, a barycenter calculation is performed on the image sensing data obtained from a plurality of effective light-receiving areas.
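The barycenter calculation referred to here is, in essence, an intensity-weighted mean; the sketch below uses hypothetical line positions and intensity samples to show how a fractional, sub-pixel position is obtained:

```python
# Barycenter (intensity-weighted mean) over samples of one intended pixel
# taken from several effective light-receiving areas. The fractional result
# gives a resolution finer than the pixel pitch. Values are assumed.
def barycenter(positions, intensities):
    total = sum(intensities)
    return sum(p * i for p, i in zip(positions, intensities)) / total

# Subscanning line positions and the intensities sampled there.
positions = [10, 11, 12, 13, 14]
intensities = [2, 8, 15, 9, 3]
center = barycenter(positions, intensities)
print(center)   # a fractional line position, i.e. sub-pixel accuracy
```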
Conventionally, a read operation of each pixel using the MOS type image sensor as an image sensing device is performed by a horizontal read method, in which the pixels are read out sequentially and continuously along the direction perpendicular to the direction in which the detection light moves on the image sensing surface (Japanese Patent Application laid-open No. 7-174536(A)).
When reading the image sensing data obtained by the MOS type image sensor, however, the effective light-receiving area is shifted line by line. The timing of reading a specific intended pixel in the effective light-receiving area therefore becomes earlier each time the effective light-receiving area is shifted, thus deviating from the timing of shifting the effective light-receiving area by one line.
This deviation of timing progressively increases for a specific intended pixel with the sequential shifting of the effective light-receiving area. Specifically, in the horizontal read method described above, each time the effective light-receiving area shifts, the read time of one line is added to the deviation of timing. As a result, the time lag, which is at most one line at the first read session, grows to 31 lines at the 32nd read session.
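The growth of this lag can be sketched with normalized units (the line read time is taken as 1, and the per-session fraction of a line at the first session is ignored):

```python
# Timing deviation in the horizontal read method: each shift of the effective
# light-receiving area adds one line's read time to the lag of a specific
# intended pixel. Units are normalized for illustration.
LINE_READ_TIME = 1          # time to read one line (normalized)
SESSIONS = 32               # read sessions over which the pixel is tracked

lags = [session * LINE_READ_TIME for session in range(SESSIONS)]
print(lags[0], lags[-1])    # the lag grows to 31 lines by the 32nd session
```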
In the case where the barycenter calculation is performed based on the data of a specific intended pixel in a plurality of effective light-receiving areas, therefore, the calculation result contains a considerable error and becomes unreliable. For the calculation result to be reliable, a correction process for reducing the deviation of timing may be necessary, but such a process is very complicated.
Also, in the three-dimensional measuring system described above, the amount of slit light received by the image sensing device varies with the reflectance of an object. As long as the electric charge accumulation time of the image sensing device is fixed to a predetermined time, therefore, the output of the image sensing device is saturated for a high reflectance. Conversely, a low reflectance poses the problem of an inferior S/N ratio due to an excessively low output of the image sensing device.
For determining a three-dimensional shape (distance distribution) of an object, on the other hand, it is necessary to accurately detect the receiving position or the receiving timing of slit light. For this reason, as described above, the barycenter calculation is performed based on the sensor output produced before and after a particular position or time point. The barycenter calculation requires an accurate output of the image sensing device. In the case of an inferior S/N ratio as described above or in the case where the output of the image sensing device is saturated, an accurate barycenter calculation is difficult.
To solve this problem in the prior art, a slit light beam is projected once to produce an output of the image sensing device so as to set an appropriate accumulation time. The accumulation time thus set, however, is shared by all the pixels of the image sensing device, and therefore the problem described above still remains unsolved in the case where the reflectance of the object is partially low or high.
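The problem of a single, shared accumulation time can be sketched as follows; the saturation level, accumulation time, noise floor and reflectance values are all assumed numbers for illustration:

```python
# With a fixed electric charge accumulation time, a high-reflectance part
# saturates the sensor output while a low-reflectance part yields a poor
# S/N ratio. All numbers are assumed.
FULL_WELL = 255                  # saturation level of the sensor output
ACCUMULATION_TIME = 10           # fixed accumulation time (normalized)
NOISE = 5                        # constant noise floor

def sensor_output(reflectance):
    signal = reflectance * ACCUMULATION_TIME
    return min(signal, FULL_WELL)        # output clips at saturation

high = sensor_output(40)   # clipped to 255: saturated, barycenter distorted
low = sensor_output(1)     # output 10: S/N of only 10/5 = 2
print(high, low)
```

Because the accumulation time is shared by all pixels, no single choice of ACCUMULATION_TIME in this sketch can avoid both failure modes at once when both reflectances are present in the scene.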
Also, in the three-dimensional measuring system described above, an object may be extremely small as compared with the visual field of image sensing. The size of the object with respect to the visual field is determined by the image sensing angle of view and the image sensing distance. The angle of view can be adjusted by a zoom function but has a minimum value determined by the zoom specification. The image sensing distance also has a tolerance, so that image sensing at closer than a minimum distance (generally, several tens of centimeters to one meter) is impossible.
Conventionally, although the read range of the image sensing at each sampling in each main scanning is a part of the image sensing surface, the whole image sensing surface constitutes the read range over the whole scanning period. In the case where an object is excessively small as compared with the visual field as mentioned above, therefore, unrequired data represent a large proportion of the data obtained by sampling, i.e., the data read operation from the image sensing device and the subsequent data processing are very inefficient.