This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Conventional image capture devices render a three-dimensional scene onto a two-dimensional sensor. During operation, a conventional capture device captures a two-dimensional (2-D) image representing an amount of light that reaches each point on a photo-sensor (or photo-detector) within the device. However, this 2-D image contains no information about the directional distribution of the light rays that reach the photo-sensor (which may be referred to as the light-field). Depth, for example, is lost during the acquisition. Thus, a conventional capture device does not store most of the information about the light distribution from the scene.
Light-field capture devices (also referred to as “light-field data acquisition devices”) have been designed to measure a four-dimensional (4D) light-field of the scene by capturing the light from different viewpoints of that scene. Thus, by measuring the amount of light traveling along each beam of light that intersects the photo-sensor, these devices can capture additional optical information (information about the directional distribution of the light rays), enabling new imaging applications through post-processing. The information acquired/obtained by a light-field capture device is referred to as the light-field data. Light-field capture devices are defined herein as any devices that are capable of capturing light-field data.
Light-field data processing notably comprises, but is not limited to, generating refocused images of a scene, generating perspective views of a scene, generating depth maps of a scene, generating extended depth of field (EDOF) images, generating stereoscopic images, and/or any combination of these.
Among the several types of light-field capture devices disclosed in the background art, the plenoptic devices use a micro-lens array positioned in the image focal field of the main lens, and before a photo-sensor on which one micro-image per micro-lens is projected. The area of the photo-sensor under each micro-lens is, in the background art, referred to as a macropixel. Thus, the plenoptic device generates one micro-lens image at each macropixel. In this configuration, each macropixel depicts a certain area of the captured scene, and each pixel of this macropixel depicts this area from the point of view of a certain sub-aperture location on the main lens exit pupil.
The raw image of the scene obtained as a result, also referred to as “output data”, is the sum of all the micro-lens images acquired from respective portions of the photo-sensor. These output data contain the angular information of the light field. Based on these output data, the extraction of an image of the captured scene from a certain point of view, also called “de-multiplexing” in the following description, can be performed by concatenating the output pixels covered by each micro-lens image. This process can also be seen as a data conversion from the 2D raw image to the 4D light-field.
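The de-multiplexing step described above can be illustrated by the following sketch. It assumes an idealized plenoptic sensor in which every macropixel is a perfectly aligned rectangular block of pixels; the function names and block dimensions are illustrative only, and a real device would additionally require calibration for micro-lens centers, rotation and vignetting (as discussed further below).

```python
import numpy as np

def demultiplex(raw, n_u, n_v):
    """Convert a 2D plenoptic raw image into a 4D light-field.

    Idealized model: each micro-lens image (macropixel) is an aligned
    n_u x n_v block of pixels on the sensor. Indices (u, v) select the
    sub-aperture (viewpoint) and (s, t) the spatial position.
    """
    h, w = raw.shape
    n_s, n_t = h // n_u, w // n_v  # number of micro-lenses per axis
    # Group the raw pixels by macropixel, then reorder the axes so the
    # angular coordinates (u, v) come first.
    lf = raw[:n_s * n_u, :n_t * n_v].reshape(n_s, n_u, n_t, n_v)
    return lf.transpose(1, 3, 0, 2)  # shape (n_u, n_v, n_s, n_t)

def sub_aperture_view(lf, u, v):
    """Concatenate, over all macropixels, the pixel at position (u, v):
    the image of the scene seen from that sub-aperture location."""
    return lf[u, v]
```

Under this idealized model, extracting one point of view amounts to taking the same pixel offset within every macropixel, which makes the conversion from the 2D raw image to the 4D light-field explicit.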
Due to the considerable amount of data generated by plenoptic devices, the compression of light-field data remains an important challenge to overcome in computational photography. A few publications of the background art, among which U.S. Pat. Nos. 8,228,417B1 and 6,476,805B1 and the publication US20090268970A1, describe various processes intended to reduce the size of the light-field obtained after de-multiplexing the raw image. Such compression methods, even if welcome in late stages of the image processing, do not contribute in any way to reducing the size of the output data acquired following the capture of a scene by a plenoptic device.
Still, the publication US20090268970A1 describes a light-field preprocessing module adapted to reshape a micro-lens image by cropping it into shapes compatible with the blocking scheme of a block-based compression technique (e.g., squares of size 8×8 or 16×16 for JPEG). The first main drawback of such a method is the undifferentiated suppression, for each micro-lens image, of all the output pixels (and the corresponding information) located outside the cropping perimeter. Such lost information cannot be recovered at a later stage and must be compensated for by a resource-intensive interpolation process, which tends to further increase the computational load of the light-field image processing.
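A hypothetical sketch of such block-compatible cropping illustrates the information loss at stake. The function below is an assumption about the general approach (a centered square crop to a block-aligned size), not a reproduction of the method of US20090268970A1; every pixel outside the crop is discarded without differentiation.

```python
import numpy as np

def crop_to_block(micro_image, block=8):
    """Crop a micro-lens image to a centered block x block square,
    compatible with block-based compression (e.g., 8x8 JPEG blocks).

    Pixels outside the cropping perimeter are suppressed and cannot
    be recovered at a later processing stage.
    """
    h, w = micro_image.shape
    top, left = (h - block) // 2, (w - block) // 2
    return micro_image[top:top + block, left:left + block]

# Example: a 10x10 micro-lens image cropped to 8x8 loses
# 100 - 64 = 36 pixels per macropixel, irrecoverably.
```

Since the suppression is applied identically to every micro-lens image, the loss scales with the number of micro-lenses, which motivates the interpolation burden mentioned above.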
The disk model of the micro-lens images depends on the intrinsic parameters of the plenoptic device, as well as on the position of the micro-lens image on the photo-sensor. For peripheral parts of the sensor, the vignetting of the main lens is non-symmetric. Besides, the position of the micro-lens images moves on the sensor when the zoom/focus parameters of the camera are changed. Therefore, in the methods and devices known in the background art, all the captured pixels (also called “input pixels”) have to be stored and transferred to post-processing.
It would hence be desirable to provide a light-field capture device showing improvements over the background art.
Notably, it would be desirable to provide such a device that would be adapted to reduce the size of the raw image initially stored, while preserving the workable data of the captured light-field and limiting the computing load of the corresponding processing.