Traditional camera calibration methods can be roughly divided into two types. The first builds a precise world coordinate system in an elaborately arranged three-dimensional space and calculates the camera's parameters through a space transformation and a mapping transformation. Its advantage is high accuracy; its disadvantage is that practically building a precise world coordinate system requires a large working space and considerable labor to arrange the three-dimensional setup. The second type matches features of the same real-world scene viewed from different angles while the camera parameters are held fixed, so that the parameters can be calculated from a limited number of views. Its advantage is that it needs neither a dedicated working space nor much labor; its disadvantage is low accuracy.
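The two transformations mentioned above can be made concrete with a minimal pinhole-model sketch. All numbers below (focal lengths, principal point, rotation, translation) are hypothetical values chosen for illustration, not parameters from the source:

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths fx, fy; principal point cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsic parameters: rotation R and translation t,
# i.e. the "space transformation" from world to camera coordinates.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def project(X_world):
    """Map a 3-D world point to pixel coordinates via the pinhole model."""
    X_cam = R @ X_world + t    # space transformation (world -> camera)
    x = K @ X_cam              # mapping transformation (camera -> image plane)
    return x[:2] / x[2]        # perspective division to pixel coordinates

print(project(np.array([0.1, -0.2, 0.0])))  # → [336. 208.]
```

Calibration is the inverse problem: given observed pixel positions of known world points, recover `K`, `R`, and `t`.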
However, automation manufacturers still face challenges with both methods, for the following reasons:
(1) when the calibration plate is rotated by hand, the results differ between operators, introducing operator-dependent error that is not acceptable in the manufacturing stage; and
(2) to obtain better accuracy, more images of different calibration plate poses are needed, and the camera's parameters are calculated by algebraic approximation to derive the space transformation matrix and the mapping matrix. However, this calculation assumes the camera's intrinsic parameters remain unchanged across all images; capturing too many images of the calibration boards causes uniformity problems in the camera's intrinsic parameters.
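The "algebraic approximation" in point (2) is typically a linear least-squares fit. The sketch below, using purely synthetic data (the matrix `P_true` and the random calibration points are assumptions for illustration), shows the Direct Linear Transform: each point correspondence contributes two linear equations in the entries of the combined space-and-mapping matrix, and the homogeneous system is solved by SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth 3x4 projection matrix P, combining the
# mapping (intrinsic) matrix with the world-to-camera space transformation.
P_true = np.array([[700.0,   0.0, 300.0, 100.0],
                   [  0.0, 700.0, 200.0,  50.0],
                   [  0.0,   0.0,   1.0,   4.0]])

# Synthetic 3-D calibration points and their noiseless image projections.
X = rng.uniform(-1.0, 1.0, size=(20, 3))
Xh = np.hstack([X, np.ones((20, 1))])   # homogeneous world coordinates
x = (P_true @ Xh.T).T
uv = x[:, :2] / x[:, 2:3]               # observed pixel coordinates

# Direct Linear Transform: u = (p1 . X)/(p3 . X) and v = (p2 . X)/(p3 . X)
# rearrange into two linear equations per point in the 12 entries of P.
A = []
for Xw, (u, v) in zip(Xh, uv):
    A.append(np.concatenate([Xw, np.zeros(4), -u * Xw]))
    A.append(np.concatenate([np.zeros(4), Xw, -v * Xw]))
A = np.asarray(A)

# The solution is the right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)
P_est /= P_est[2, 3] / P_true[2, 3]     # fix the arbitrary overall scale
```

With noise-free points the recovery is exact up to scale; with real, noisy detections the same system is solved approximately, which is why more images improve accuracy, but only while the intrinsic parameters truly stay fixed across all of them.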