A traditional camera captures a projection of the light field onto a two-dimensional sensor plane. The image records the intensity of light falling on each photosite of the sensor, and all angular information about the light is lost. For example, as shown in FIG. 1A, a traditional camera is focused such that one point 100 on the front focal plane 102 corresponds to one point 104 on the sensor plane 106. Light from points 108 and 110 on other planes is spread over a larger area 112 (i.e., multiple photosites) on the sensor 106, causing objects in those planes to appear blurry. Points such as 108 and 110, lying in front of or behind the front focal plane, can produce overlapping intensity distributions, and sharp focus is difficult to recover after the image has been captured. However, if the angular information of the light incident on the sensor could be preserved (i.e., if the 4D light field were captured), the positions of points not lying in the front focal plane could be computed, and the image could be refocused after the fact. In addition, multiple views of the same scene could be reconstructed from a single exposure for use in 3D images and multi-perspective panoramas. Such techniques are useful in a number of commercial systems, for both scientific imaging and consumer applications.
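The after-the-fact refocusing described above is commonly performed by shift-and-add over the captured 4D light field: each sub-aperture view is translated in proportion to its angular offset and the views are averaged. The sketch below is purely illustrative and is not drawn from this disclosure; the array layout, the `alpha` refocus parameter, and the integer-pixel shifts are simplifying assumptions.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-add.

    light_field: array of shape (U, V, X, Y) holding sub-aperture
                 images indexed by angular coordinates (u, v).
    alpha: refocus parameter; each view is shifted by (1 - 1/alpha)
           times its angular offset from the aperture center, then
           all shifted views are averaged.
    """
    U, V, X, Y = light_field.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift proportional to this view's offset from the aperture center
            # (rounded to whole pixels for simplicity).
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# A constant (featureless) light field refocuses to the same constant
# image at any alpha, which makes a simple sanity check.
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, alpha=2.0)
```

In practice, sub-pixel shifts with interpolation are used rather than `np.roll`, but the averaging structure is the same.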
A number of modifications to traditional cameras have been proposed to capture the 4D light field and thereby enable these novel photographic techniques. These include multiple lenses that offer slightly different views of a scene; stacked gratings above the image sensor, which use shifts in the Talbot pattern to infer the angle of incidence of light from changes in light intensity at a photosite; and microlenses at the camera's rear focal plane, which separate light arriving from different angles onto different photosites. FIG. 1B illustrates this last approach, in which rays 124 incident on the sensor are focused onto different photosites 122 by microlenses 120 placed just in front of the sensor plane.
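Because each microlens maps angle of arrival to position within the tile of photosites beneath it, a raw lenslet image can be rearranged into sub-aperture views by gathering the same within-tile offset from every microlens. The sketch below is an illustrative assumption, not part of this disclosure: it posits a sensor whose photosites form exact k-by-k tiles under each microlens, with no rotation or misalignment between the array and the sensor.

```python
import numpy as np

def subaperture_views(raw, k):
    """Rearrange a raw lenslet image into sub-aperture views.

    raw: 2D sensor image of shape (k*M, k*N), where each k-by-k tile
         of photosites lies under one microlens.
    Returns an array of shape (k, k, M, N): view (u, v) collects the
    photosite at offset (u, v) under every microlens, i.e. the light
    that arrived from one particular direction.
    """
    H, W = raw.shape
    M, N = H // k, W // k
    # Split rows and columns into (microlens index, within-tile offset),
    # then bring the two offset axes to the front: (M, k, N, k) -> (k, k, M, N).
    return raw.reshape(M, k, N, k).transpose(1, 3, 0, 2)

# Tiny example: 2x2 photosites per microlens, 3x3 microlenses.
raw = np.arange(36).reshape(6, 6)
views = subaperture_views(raw, k=2)
```

View `(0, 0)` here is simply every other photosite starting at the top-left corner of each tile, which matches taking `raw[0::2, 0::2]`.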
In all of these cases, capturing additional angular information from the light field comes at the cost of a reduction in the number of pixels in the final image, a reduction in the amount of light falling on each photosite for a given exposure time, or both. More specifically, light-field cameras that use microlenses face the challenges of precisely aligning the microlenses with the sensor and of matching the aperture of the camera's main lens to that of the microlens array. Current light-field cameras therefore trade directional resolution against spatial resolution, and all of these designs require slower shutter speeds because of light loss. The need for light-field sensors with high sensitivity and high photosite density is clear.
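The severity of this trade-off can be made concrete with back-of-the-envelope arithmetic. The figures below are hypothetical and chosen only for illustration; they do not describe any particular camera.

```python
# Hypothetical example: a 40-megapixel sensor whose microlenses each
# cover a 10x10 tile of photosites.
sensor_photosites = 40_000_000
k = 10  # photosites per microlens, along each axis

# Directional resolution: one angular sample per photosite in a tile.
angular_samples = k * k

# Spatial resolution: one output pixel per microlens, so the final
# image shrinks by a factor of k*k relative to the sensor.
spatial_pixels = sensor_photosites // angular_samples

# Each sub-aperture view integrates only a fraction of the light that
# a conventional photosite would collect in the same exposure.
light_fraction_per_view = 1 / angular_samples
```

With these assumed numbers, a 40-megapixel sensor yields only a 0.4-megapixel refocusable image, and each directional sample sees roughly one hundredth of the light, illustrating why such designs push toward high-sensitivity, high-density sensors.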