1. Field of the Invention
This invention relates generally to the reconstruction of three-dimensional scenes, for example the reconstruction of such scenes from images captured by a plenoptic imaging system.
2. Description of the Related Art
Light fields were first introduced in the computer graphics community as a way to represent three-dimensional scenes via multiple views of a scene taken from different viewpoints. In general, the light field of a scene is a seven-dimensional function that contains two-dimensional images (i.e., light field images) of the scene taken from any viewpoint in three-dimensional space, at any wavelength and any time instant. In computer graphics applications, a computer can render the scene from any viewpoint because it has an explicit three-dimensional scene model, including the scene's three-dimensional shape and texture. That is, the computer can render any of the light field images and therefore can also calculate the entire light field of the scene.
More recently, systems have been developed for capturing a four-dimensional light field of three-dimensional scenes. These systems, which include camera arrays and plenoptic imaging systems, typically capture a four-dimensional light field: two-dimensional images of the scene taken from any viewpoint on a two-dimensional surface (rather than from any viewpoint in three-dimensional space), at a certain wavelength (or wavelength band) and time instant. In these systems, the three-dimensional scene information is not captured explicitly. Rather, it is implicitly contained within the pixels of the captured four-dimensional light field. Extracting three-dimensional information from the four-dimensional light field is then an inverse problem, which becomes ill-posed when occlusions are present.
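The dimensional reduction described above can be summarized notationally as follows (a sketch; the symbols and the two-plane parameterization are illustrative conventions, not taken from this document):

```latex
% Full plenoptic function: radiance seen from position (x, y, z),
% in direction (\theta, \phi), at wavelength \lambda and time t:
L_7 = L(x, y, z, \theta, \phi, \lambda, t)

% Restricting viewpoints to a two-dimensional surface (e.g., a
% two-plane parameterization with viewpoint coordinates (u, v) and
% image-plane coordinates (s, t)), and fixing wavelength and time,
% yields the captured four-dimensional light field:
L_4 = L(u, v, s, t)
```

Under this parameterization, holding (u, v) fixed yields one two-dimensional view of the scene, which is why the captured data can be regarded as a multi-view image stack.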
Furthermore, resolving the ambiguities introduced by occlusions and segmenting a light field (or the underlying three-dimensional scene) into components at different depth layers is often a necessary step for subsequent light field processing methods, such as digital filtering of scene parts or synthesis of virtual views from the multi-view image stack. If the scene layers are not properly segmented, the application of classical image processing methods may introduce artifacts around object boundaries.
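As one concrete illustration of the light field processing referred to above, a virtual image focused at a chosen depth can be synthesized from the multi-view image stack by shifting each sub-aperture view according to its viewpoint offset and averaging (shift-and-sum refocusing). The sketch below is a minimal version of that standard technique, not the method of this document; the array shapes and the `disparity` parameter are assumptions for illustration, and occlusions are exactly what this simple averaging fails to handle.

```python
import numpy as np

def refocus(light_field, disparity):
    """Shift-and-sum refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) -- a U x V grid of
        sub-aperture (viewpoint) images, each H x W pixels.
    disparity: per-viewpoint pixel shift corresponding to the chosen
        focal depth (0 keeps the originally captured focal plane).
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer shift of each view toward the central viewpoint;
            # real systems interpolate for sub-pixel shifts.
            du = int(round((u - cu) * disparity))
            dv = int(round((v - cv) * disparity))
            acc += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return acc / (U * V)

# Scene points at the chosen depth align across views and are
# reinforced; points at other depths (or occluded in some views)
# are averaged inconsistently, producing blur or boundary artifacts.
lf = np.random.rand(5, 5, 32, 32)
img = refocus(lf, disparity=1.0)
```

The boundary artifacts this naive averaging produces around occlusions are precisely why segmenting the light field into depth layers before such processing is needed.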
Thus, there is a need for approaches to handle occlusions when reconstructing three-dimensional scenes from light field data.