To obtain geometric information about an object using a camera, the camera must be calibrated by estimating the intrinsic parameters that relate the image information obtained from the camera to the actual geometric information of the object. FIG. 1 illustrates an image of a 3D artificial calibration object model in a conventional method of calibrating a camera using a 3D calibration object, and FIG. 2 is an image of a plane model in a conventional method of calibrating a camera using the coordinates of points on a two-dimensional plane.
Conventional methods for calibrating a camera include: (1) a calibration method using a 3D calibration object, (2) a self-calibration method, and (3) a calibration method using the coordinates of points on a two-dimensional plane.
The first calibration method, which has been the most widely used to date, calibrates the camera using a 3D artificial calibration object, such as a rectangular parallelepiped calibration object, as shown in FIG. 1. As can be seen from FIG. 1, a photograph of the rectangular calibration object is taken to obtain the geometric relation of the rectangular parallelepiped object.
However, in this first method of calibrating a camera using a 3D artificial calibration object, it is difficult to manufacture and maintain a calibration object in the form of a rectangular parallelepiped. This is because the calibration object must have the properties of an exact rectangular parallelepiped in order for the intrinsic parameters of the camera to be calculated from its image. In other words, it must be a rectangular parallelepiped in which the three planes and twelve edges meet at right angles at each vertex. Otherwise, exact calculation of the camera's intrinsic parameters becomes difficult, and the reliability of the calibration is degraded accordingly.
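The geometric relation that such calibration exploits is the pinhole projection of known 3D points of the calibration object into pixel coordinates. The following is a minimal, purely illustrative sketch of that relation (the numeric values for the intrinsic matrix K, rotation R, and translation t are assumptions for demonstration, not values from this document):

```python
# Pinhole projection sketch: p ~ K [R | t] X  (illustrative values only).
# K holds the intrinsic parameters being calibrated: the focal lengths
# fx, fy and the principal point (cx, cy).

def project(K, R, t, X):
    """Project a 3D point X (object frame) to pixel coordinates."""
    # Camera-frame coordinates: Xc = R X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Apply the intrinsics and divide by the depth Xc[2]
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v

K = [[800.0, 0.0, 320.0],   # fx, skew (0), cx
     [0.0, 800.0, 240.0],   # fy, cy
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, 5.0]         # object 5 units in front of the camera

print(project(K, R, t, [0.1, 0.2, 0.0]))  # -> (336.0, 272.0)
```

Calibration inverts this relation: from many measured point/pixel pairs on a precisely known object, the entries of K are estimated, which is why deviations of the object from a true rectangular parallelepiped corrupt the result.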
The second method, self-calibration, calculates the intrinsic parameters of the camera using only the correspondences between points in several images. Although this self-calibration method can be applied broadly, without the limitation of requiring an artificial calibration object, it is difficult to determine the corresponding points exactly. As a result, this method has the problems that the process of finding a solution is very complicated and that it is also difficult to find a correct solution.
The third method, as shown in FIG. 2, calibrates the camera using the coordinates of points on a plane. As shown in FIG. 2, this method estimates the intrinsic parameters of the camera by taking several images of a plane pattern and exactly matching the coordinates of the points on the plane with the coordinates of the corresponding points on each image plane.
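The reason a flat pattern suffices is that points on the plane Z = 0 map to the image through a 3x3 homography H ~ K [r1 r2 t], where r1 and r2 are the first two columns of the rotation matrix; estimating H from several views of the pattern allows K to be recovered. A minimal sketch of this plane-to-image mapping follows (all numeric values are illustrative assumptions, not from this document):

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]

theta = math.radians(20.0)  # plane tilted 20 degrees about the Y axis
R = [[math.cos(theta), 0.0, math.sin(theta)],
     [0.0, 1.0, 0.0],
     [-math.sin(theta), 0.0, math.cos(theta)]]
t = [0.1, -0.2, 4.0]

# [r1 r2 t]: the first two rotation columns and the translation
M = [[R[0][0], R[0][1], t[0]],
     [R[1][0], R[1][1], t[1]],
     [R[2][0], R[2][1], t[2]]]
H = matmul(K, M)            # homography from the pattern plane to the image

X, Y = 0.25, 0.4            # a point on the planar pattern (Z = 0)
hx, hy, hw = matvec(H, [X, Y, 1.0])
print(hx / hw, hy / hw)     # its pixel coordinates
```

In practice the direction is reversed: H is fitted from measured point correspondences in each view, and constraints from several such homographies determine K.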
Meanwhile, for the case in which an image of a circle is taken by a camera whose principal point and focal length are given, a method for calculating the distance from the camera to the center point of the circle has been studied. For example, a paper by K. Kanatani and W. Liu, "3D Interpretation of Conics and Orthogonality," CVGIP: Image Understanding, vol. 58, no. 3, pp. 286-301, November 1993, describes such a method (see Equations 57 and 58 of that paper). Using rough initial guesses for the intrinsic parameters together with some knowledge about the camera, such as the sensor cell size, the pose of the circle in 3D space can be estimated. Using the circle pose estimation algorithm described in the above-mentioned paper, the 3D position of the circle's supporting plane, namely its normal vector and distance, denoted by {overscore (n)} and d, can be found.
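The cited algorithm works with the full conic of the circle's image and also recovers the supporting plane's normal vector. As a much-simplified intuition only, consider the degenerate fronto-parallel case, where the pinhole model reduces the distance recovery to similar triangles (the numeric values below are illustrative assumptions, and this is not the general method of the cited paper):

```python
# Simplified sketch: distance to the center of a circle that faces the
# camera head-on (fronto-parallel). By similar triangles in the pinhole
# model, the image radius is r = f * R / d, so d = f * R / r.
# Illustrative values only; the general tilted-circle case requires the
# conic-based algorithm of Kanatani and Liu.

def circle_distance(f_pixels, radius_world, radius_pixels):
    """Distance from the camera to the circle center along the optical axis."""
    return f_pixels * radius_world / radius_pixels

f = 800.0   # focal length in pixels (assumed known, e.g. from sensor specs)
R = 0.05    # true circle radius: 5 cm
r = 10.0    # measured radius of the circle's image, in pixels

print(circle_distance(f, R, r))  # -> 4.0 (meters)
```

When the circle is tilted, its image becomes an ellipse, and the full conic must be analyzed to separate the tilt (the normal vector {overscore (n)}) from the distance d.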
In this third method, which uses the coordinates of points on a plane, a 2D plane pattern is easier to manufacture and maintain than a 3D artificial calibration object. However, the method is very complicated, since it must use a plurality of calibration points from the images of the plane and compare them to one another.