State of the art approaches to camera array calibration include several techniques. One common technique, widely used for calibrating stereo pairs, uses a two-dimensional (2D) planar object to calibrate sets of cameras that can all see the whole 2D surface. Another camera array calibration technique uses a one-dimensional (1D) object, such as a light, to calibrate an array of sensors/cameras, provided that all cameras can see the light. Yet another calibration method uses Structure from Motion (SFM), a process of finding the three-dimensional structure of an object by analyzing local motion signals over time, and is applicable to both fixed and moving cameras.
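A core step of planar (2D) calibration is estimating, for each camera, the homography that maps points on the planar target to their observed image positions. The following is a minimal illustrative sketch, not a depiction of any particular claimed method: it synthesizes a planar grid of target points, projects them through a known camera, and recovers the plane-to-image homography with a basic Direct Linear Transform. All names and parameter values here are assumptions chosen for the example.

```python
import numpy as np

# Assumed synthetic camera: intrinsics K and a pose (R, t) viewing the Z = 0 plane.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, -0.05, 2.0])

# Points on the planar target (Z = 0), e.g. corners of a calibration pattern.
xs, ys = np.meshgrid(np.arange(6), np.arange(5))
plane_pts = np.stack([xs.ravel() * 0.03, ys.ravel() * 0.03], axis=1)

# For points on the Z = 0 plane, projection reduces to a homography
# H = K [r1 r2 t], where r1 and r2 are the first two columns of R.
H_true = K @ np.column_stack([R[:, 0], R[:, 1], t])
uv_h = (H_true @ np.column_stack([plane_pts, np.ones(len(plane_pts))]).T).T
uv = uv_h[:, :2] / uv_h[:, 2:3]   # observed pixel coordinates

def estimate_homography(src, dst):
    """Direct Linear Transform: solve for H mapping src (x, y) to dst (u, v)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the right singular vector of A with smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)

H_est = estimate_homography(plane_pts, uv)

# Check: reproject the plane points through the estimated homography.
proj_h = (H_est @ np.column_stack([plane_pts, np.ones(len(plane_pts))]).T).T
proj = proj_h[:, :2] / proj_h[:, 2:3]
max_err = np.abs(proj - uv).max()   # should be near zero for this noise-free data
```

In a practical calibration pipeline, homographies from several views of the plane would then be combined to solve for the camera intrinsics and per-view poses; with noisy detections, point normalization and nonlinear refinement are also typically applied.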
Accurate calibration of all sensors capturing a scene is important when creating a three-dimensional spatial video such as, for example, a Free Viewpoint Video (FVV), because realistic depiction of the synthetic scenes generated from the sensor data depends on it. FVV is created from images captured by multiple cameras viewing a scene from different viewpoints, and allows a user to view the scene from synthetic viewpoints created from the captured images and to navigate around the scene.