Vision sensors are widely used owing to their wide measurement range, non-contact operation, high response speed, system flexibility, and good measuring accuracy. When objects over a larger area must be measured, several vision sensors, each with a limited field of view, can be combined into a multi-sensor system that provides both a larger measuring range and high measuring accuracy. Such a measurement system is commonly referred to as a Multi-Sensor Vision Measurement System (MSVMS).
Generally, in an MSVMS, the sensors are placed far apart and share no common field of view. As a result, global calibration has become a key issue in applying multiple vision sensors to measurement: the position relationships among the sensors must be determined and then unified under a single coordinate frame.
At present, three types of global calibration methods are commonly used for an MSVMS: the method based on homonymic coordinates unity, the method based on intermediary coordinates unity, and the method based on unique global coordinate unity. These methods are explained below.
In the method based on homonymic coordinates unity, the rotation matrix and translation vector from each sensor's local coordinate frame to a global coordinate frame are computed from a group of homonymic (corresponding) point coordinates.
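As a sketch of this idea, the rotation and translation that map a sensor's local frame to the global frame can be recovered from homonymic point pairs with the SVD-based (Kabsch) solution of the absolute-orientation problem. The function name and the assumption of noise-free, pre-matched correspondences are illustrative, not from the original text:

```python
import numpy as np

def rigid_transform(P_local, P_global):
    """Estimate R, t with P_global ≈ R @ p + t for each point p.

    P_local, P_global: (N, 3) arrays of homonymic (corresponding) points
    expressed in the sensor frame and the global frame, respectively.
    """
    c_l = P_local.mean(axis=0)                 # centroid in local frame
    c_g = P_global.mean(axis=0)                # centroid in global frame
    H = (P_local - c_l).T @ (P_global - c_g)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_g - R @ c_l
    return R, t
```

With real measurements the correspondences are noisy, so this least-squares solution is typically followed by an outlier check or refinement step.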
In the method based on intermediary coordinates unity, each sensor's local coordinate frame is unified into a global coordinate frame by concatenating transformations through several auxiliary (intermediary) coordinate frames.
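The concatenation step amounts to multiplying homogeneous transforms along the chain of frames. A minimal sketch, with hypothetical frame names (sensor → intermediary A → intermediary B → global) chosen only for illustration:

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and a translation t into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_to_global(*transforms):
    """Concatenate 4x4 transforms, listed from the global-most link first,
    e.g. T_global_sensor = chain_to_global(T_global_B, T_B_A, T_A_sensor)."""
    T = np.eye(4)
    for Ti in transforms:
        T = T @ Ti
    return T
```

Each extra link in the chain adds the error of its own calibration, which is why long concatenations tend to degrade overall accuracy, as noted below.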
In the method based on unique global coordinate unity, a local calibration is carried out for each vision sensor under the measuring conditions by directly using the coordinates of feature points in one global coordinate frame, thus establishing the transformation from the sensor's local coordinate frame to the global frame.
The three above-mentioned methods, however, share a common disadvantage: they rely heavily on high-accuracy measuring equipment such as theodolite pairs and laser trackers. Moreover, "blind calibration areas" arise from the limited working space and the intrinsic restrictions of such large measuring apparatus. In addition, all of these methods require multiple coordinate transformations, which degrades the calibration accuracy.
In 2005, Zhang et al. proposed a global calibration method based on planar targets. The method uses the fixed pose relationship between the feature points on two separate targets as a constraint to compute the transformation matrix between two vision sensors with non-overlapping fields of view, and it is also effective for global calibration over a wide area. More importantly, it avoids the heavy computation of repeated coordinate transformations and thus yields higher calibration accuracy. However, it requires a planar target of very large size, and machining and handling such a target are so difficult that Zhang's method is not suitable for calibrating a multi-sensor vision measurement system with a large working space.
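The core of this kind of pairwise constraint can be sketched as a single composition of homogeneous transforms. The frame naming below (sensor 1 observes target 1, sensor 2 observes target 2, and the inter-target pose is fixed and known) is an assumption made for illustration, not Zhang et al.'s exact formulation:

```python
import numpy as np

def sensor1_to_sensor2(T_s1_t1, T_t1_t2, T_s2_t2):
    """Transform from the sensor-2 frame to the sensor-1 frame.

    All arguments are 4x4 homogeneous transforms, where T_a_b maps
    coordinates expressed in frame b into frame a. Each sensor measures
    only the pose of its own target; the rigidly fixed inter-target pose
    T_t1_t2 links the two non-overlapping fields of view.
    """
    return T_s1_t1 @ T_t1_t2 @ np.linalg.inv(T_s2_t2)
```

A single composition like this is what lets the calibration avoid chaining through many intermediary frames.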