A plenoptic camera, sometimes referred to as a light-field camera, typically includes an array of micro-lenses located proximate the focal plane of the camera. This feature of the plenoptic camera allows it to capture the light field of a scene. With the aid of a computer, a user can post-process the light field captured by the plenoptic camera to reconstruct images of the scene from different points of view. Further, the user can also change the focus point of the images captured by the plenoptic camera.
Compared to a conventional camera, the plenoptic camera includes extra optical components (i.e., the micro-lens array) that enable it to achieve the capabilities mentioned above. At least two different types of plenoptic cameras presently exist. A first type, as exemplified by the plenoptic camera manufactured by Lytro, Inc., Mountain View, Calif., USA, has its array of micro-lenses located one focal length from the camera image sensor. All the micro-lenses in the array have the same focal length. This micro-lens configuration affords maximum angular resolution but low spatial resolution. A second type, as exemplified by the plenoptic camera manufactured by Raytrix GmbH, Kiel, Germany, has a micro-lens array with three types of micro-lenses. This type of plenoptic camera is characterized by the fact that the image of the main lens forms not on the micro-lenses but on a surface in the air. That surface then serves as the object, which the micro-lens array images onto the sensor. The three different types of micro-lenses provide a larger depth of field than a micro-lens array having a single kind of micro-lens. Because the micro-lenses are focused on the main image, this type of plenoptic camera sacrifices angular resolution for better spatial resolution.
Many present-day plenoptic cameras arrange the micro-lenses in the array in a hexagonal pattern, although a Cartesian grid could also work. A Bayer-pattern color filter filters the light incident on the individual light-sensing elements of the camera image sensor, thereby enabling the sensor to capture color information in a coarsely sampled image. This sampled image contains small sub-images formed under each micro-lens. The sub-image formed under each micro-lens constitutes the sampled image of the exit pupil of the main camera lens as seen by that micro-lens, and thus contains the angular information of the light field. Concatenating the pixels taken from a fixed position under each micro-lens (i.e., the same pixel position in the sub-images) yields an image of the captured scene from a particular viewpoint. Hereinafter, the term “view de-multiplexing” refers to the process of extracting the pixels to form an image of the captured scene from a particular viewpoint.
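The view de-multiplexing procedure just described can be sketched as follows. This is a minimal illustration only: it assumes a hypothetical raw sensor image on a square (Cartesian) micro-lens grid with a fixed integer pitch, whereas a real hexagonal arrangement would require per-row offsets; the function and parameter names are illustrative, not taken from any actual camera's software.

```python
import numpy as np

def demultiplex_view(raw, micro_pitch, u, v):
    """Extract one sub-aperture view from a plenoptic raw image.

    raw         : 2-D array, the sensor image (single channel, for brevity)
    micro_pitch : number of sensor pixels spanned by one micro-lens
                  (square grid assumed; hexagonal grids need row offsets)
    (u, v)      : fixed pixel position under each micro-lens, i.e. the
                  angular coordinate selecting the desired viewpoint
    """
    assert 0 <= u < micro_pitch and 0 <= v < micro_pitch
    # Take the (u, v)-th pixel of every micro-lens sub-image: one pixel
    # per micro-lens, concatenated into a view of the scene.
    return raw[u::micro_pitch, v::micro_pitch]

# Toy example: 4 x 4 micro-lenses, each covering 3 x 3 sensor pixels.
raw = np.arange(12 * 12).reshape(12, 12)
view = demultiplex_view(raw, micro_pitch=3, u=1, v=1)
print(view.shape)  # one pixel per micro-lens: (4, 4)
```

Sliding (u, v) over all positions under a micro-lens enumerates the full set of viewpoints; the number of distinct views equals the number of pixels under each micro-lens, which is why this design trades spatial for angular resolution.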
With the Bayer color filter positioned in front of the camera image sensor, the captured image can undergo de-mosaicking after the views have been de-multiplexed. Because the pixels under each micro-lens carry information from different positions in the scene, de-mosaicking such images (the raw data) yields little useful information and suffers from view crosstalk. Moreover, the hexagonal arrangement of the micro-lenses results in sampling patterns that are irregular and severely monochromatic, i.e., the color sampling of the scene exhibits large spatial gaps between samples.
To de-mosaic a de-multiplexed view, a processor pre-processes the captured image to obtain information for all three color channels in every neighborhood of the view. This pre-processing includes calculating disparity maps that guide the de-mosaicking algorithm. In practice, however, the results of such pre-processing are of much lower quality than de-mosaicking the raw data.
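The disparity-guided pre-processing described above is camera- and algorithm-specific, but the underlying channel-interpolation step it relies on can be illustrated with a generic bilinear de-mosaicking sketch. This assumes a hypothetical RGGB Bayer layout and is not the method of any particular camera: each missing color sample is filled with the mean of the available samples of that channel in the surrounding 3 x 3 neighborhood.

```python
import numpy as np

def bilinear_demosaic(mosaic):
    """Bilinear de-mosaicking of an RGGB Bayer mosaic (assumed layout).

    Returns an (H, W, 3) RGB array in which every missing sample is the
    average of the same-channel samples in its 3 x 3 neighborhood.
    """
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {                                   # where each channel was sampled
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    out = np.zeros((h, w, 3))
    for c, mask in enumerate(masks.values()):
        vals = np.where(mask, mosaic, 0.0).astype(float)
        pv = np.pad(vals, 1)                    # zero-pad for border pixels
        pc = np.pad(mask.astype(float), 1)
        num = np.zeros_like(vals)               # neighborhood sum of samples
        den = np.zeros_like(vals)               # neighborhood sample count
        for dy in (0, 1, 2):                    # accumulate shifted copies
            for dx in (0, 1, 2):
                num += pv[dy:dy + h, dx:dx + w]
                den += pc[dy:dy + h, dx:dx + w]
        out[..., c] = num / np.maximum(den, 1.0)
    return out
```

Applied directly to plenoptic raw data, this kind of neighborhood averaging mixes pixels belonging to different micro-lenses, which is precisely the view-crosstalk problem noted above; de-multiplexing first avoids that mixing but leaves the large, irregular color-sampling gaps that the disparity maps are meant to bridge.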
Thus, a need exists for an improved plenoptic camera that does not suffer from at least one of the aforementioned disadvantages.