Cameras often exhibit non-idealities (e.g., lens distortion, rectangular sensor elements that yield non-square pixels, and an optical axis that is not centered on the sensor). For many camera-based operations, these non-idealities must be calibrated and compensated for so that an accurate mathematical model of the image-formation process is available. Calibration typically involves estimating the intrinsic camera parameters, such as the focal length, the aspect ratio of the individual sensor elements, the skew of the capture plane, and radial lens distortion. When multiple cameras are used to capture a scene for three-dimensional (3-D) capture, estimates of the extrinsic parameters (e.g., the relative position and orientation of each camera) and of spectral/chromatic variations across the cameras are typically needed as well.
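The role of the intrinsic parameters can be sketched with a minimal pinhole-projection example. The parameter names (`fx`, `fy`, `skew`, `cx`, `cy`, `k1`) and the one-coefficient radial-distortion model used here are illustrative assumptions, not a specific calibration library's API:

```python
def project(X, fx, fy, skew, cx, cy, k1=0.0):
    """Map a 3-D camera-frame point X = (Xc, Yc, Zc) to pixel coordinates.

    fx, fy  : focal lengths in pixels (their ratio captures the pixel aspect ratio)
    skew    : skew of the capture plane (0 for ideal rectangular pixels)
    cx, cy  : principal point (optical-axis offset from the image corner)
    k1      : first radial-distortion coefficient (illustrative one-term model)
    """
    # Perspective division: normalized image coordinates.
    x, y = X[0] / X[2], X[1] / X[2]
    # Radial lens distortion grows with squared distance from the optical axis.
    r2 = x * x + y * y
    d = 1.0 + k1 * r2
    xd, yd = x * d, y * d
    # Intrinsic mapping from normalized coordinates to pixels.
    u = fx * xd + skew * yd + cx
    v = fy * yd + cy
    return u, v

# A point on the optical axis lands exactly on the principal point.
print(project((0.0, 0.0, 1.0), fx=800.0, fy=800.0, skew=0.0, cx=320.0, cy=240.0))  # (320.0, 240.0)
```

Compensating for the non-idealities amounts to inverting this mapping once the parameters have been estimated.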
So-called “strong calibration” determines the mathematical relationship between image pixels in each camera and true 3-D coordinates with respect to some world origin. The process typically involves identifying robust, stable features of a known scene (e.g., a checkerboard pattern) whose world coordinates are known. The resulting correspondences are then fed into a nonlinear optimization that solves for both the intrinsic and the extrinsic parameters. A less constrained (“weak”) calibration suffices if only the epipolar geometry between pairs of cameras is to be recovered. Feature correspondences are again used, but no associated world coordinate information is necessary. These correspondences may be fed into a nonlinear optimization that solves for the fundamental matrix, which encodes the geometric relationship between two different viewpoints of the same scene.
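The constraint that the fundamental matrix encodes can be checked numerically. The sketch below is not the optimization itself; it assumes a hypothetical rectified stereo pair (identity intrinsics, pure translation t along the x-axis, no rotation), a special case in which F reduces to the skew-symmetric matrix [t]×, and verifies that corresponding points x₁, x₂ satisfy the epipolar constraint x₂ᵀ F x₁ = 0:

```python
def skew_sym(t):
    """3x3 skew-symmetric matrix [t]_x, so that [t]_x v equals the cross product t x v."""
    tx, ty, tz = t
    return [[0.0, -tz,  ty],
            [ tz, 0.0, -tx],
            [-ty,  tx, 0.0]]

def epipolar_residual(F, x1, x2):
    """Evaluate x2^T F x1 for homogeneous image points; zero when the
    correspondence satisfies the epipolar constraint."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

# Hypothetical rectified pair: camera 2 is translated by t = (1, 0, 0).
t = (1.0, 0.0, 0.0)
F = skew_sym(t)

# Project a 3-D point into both cameras: x1 = X/Z, x2 = (X - t)/Z.
X = (0.3, -0.2, 2.0)
x1 = (X[0] / X[2], X[1] / X[2], 1.0)
x2 = ((X[0] - t[0]) / X[2], X[1] / X[2], 1.0)

print(abs(epipolar_residual(F, x1, x2)) < 1e-12)  # True: constraint holds
```

In a real weak calibration, F is instead estimated from many noisy correspondences, and residuals such as the one computed here serve as the error terms that the optimization minimizes.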