This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Conventional image capture devices render a three-dimensional scene onto a two-dimensional sensor. During operation, a conventional capture device captures a two-dimensional (2-D) image representing an amount of light that reaches each point on a photosensor (or photodetector) within the device. However, this 2-D image contains no information about the directional distribution of the light rays that reach the photosensor (which may be referred to as the light-field). Depth, for example, is lost during the acquisition. Thus, a conventional capture device does not store most of the information about the light distribution from the scene.
Light-field capture devices (also referred to as “light-field data acquisition devices”) have been designed to measure a four-dimensional (4D) light-field of the scene by capturing the light from different viewpoints of that scene. Thus, by measuring the amount of light traveling along each beam of light that intersects the photosensor, these devices can capture additional optical information (information about the directional distribution of the light rays) for providing new imaging applications by post-processing. The information acquired/obtained by a light-field capture device is referred to as the light-field data. Light-field capture devices are defined herein as any devices that are capable of capturing light-field data.
Light-field data processing notably comprises, but is not limited to, generating refocused images of a scene, generating perspective views of a scene, generating depth maps of a scene, generating extended depth of field (EDOF) images, generating stereoscopic images, and/or any combination of these.
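By way of non-limiting illustration, refocused images may be generated from 4D light-field data with a shift-and-add scheme over sub-aperture views: each view is translated proportionally to its angular offset and the shifted views are averaged. The sketch below assumes the light-field is available as a (U, V, H, W) array of sub-aperture views; the function name, the `alpha` refocusing parameter, and the integer-pixel shifts are illustrative assumptions, not taken from any of the cited documents.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing over sub-aperture views.

    lightfield: 4-D array of shape (U, V, H, W) holding one 2-D
    sub-aperture view per angular coordinate (u, v).
    alpha: relative focal-plane parameter; alpha = 1.0 reproduces
    the nominal focal plane (zero shift).
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its angular offset
            # from the array centre, then accumulate.
            dy = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dx = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += np.roll(lightfield[u, v],
                           (int(round(dy)), int(round(dx))),
                           axis=(0, 1))
    # Average over all angular samples.
    return out / (U * V)
```

With `alpha = 1.0` the shifts vanish and the result reduces to the plain average of the sub-aperture views; other values of `alpha` synthesise focal planes in front of or behind the nominal one.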
There are several types of light-field capture devices.
A first type of light-field capture device, also referred to as a “plenoptic device”, uses a microlens array placed between the image sensor and the main lens, as described in the documents US 2013/0222633 and WO 2013/180192. Such a device is capable of sampling the light distribution and light directions in the field of light rays emanating from the scene. On the basis of this information, images can be collected with increased focal depth and/or digitally refocused. Moreover, several algorithms exist to generate images from raw light-field data at different focal planes and to estimate the depth of the scene at multiple positions. However, plenoptic devices suffer from the following disadvantage: the number of microlenses used therein intrinsically limits their effective resolution. The spatial and angular information acquired by the device is therefore limited.
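As a non-limiting sketch of the sampling performed by such a device: under an idealised, perfectly aligned microlens layout, each microlens covers a small square block of sensor pixels, and collecting the pixel at the same position under every microlens yields one sub-aperture view per angular sample. The function below assumes this idealised layout (real plenoptic raw images require demosaicing and alignment correction), and its name and parameters are illustrative.

```python
import numpy as np

def extract_subaperture_views(raw, lenslet_size):
    """Extract sub-aperture views from an idealised raw plenoptic image.

    raw: 2-D array whose pixels are grouped in lenslet_size x
    lenslet_size blocks, one block per microlens.
    Returns an array of shape (s, s, H//s, W//s): views[u, v] is the
    2-D view formed by pixel (u, v) under every microlens.
    """
    s = lenslet_size
    H, W = raw.shape
    if H % s or W % s:
        raise ValueError("image size must be a multiple of lenslet_size")
    # Split into per-microlens blocks, then regroup by intra-block
    # pixel position: views[u, v, y, x] = raw[y*s + u, x*s + v].
    return raw.reshape(H // s, s, W // s, s).transpose(1, 3, 0, 2)
```

The trade-off noted above is visible here: with U x V angular samples per microlens, each sub-aperture view has only (H/U) x (W/V) spatial resolution.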
Another type of light-field capture device uses a plurality of independently controlled cameras, each with its own lens and image sensor, or an array of cameras that image onto a single shared image sensor (see for example the document WO 2014149403). However, these devices require an extremely accurate arrangement and orientation of the cameras, which often makes their manufacture complex and costly.
Another way to capture light-field data is to acquire, with a conventional handheld camera, a series of 2-D images of a scene, each taken from a different viewpoint, and to process the images thus captured to obtain light-field data. In this technique, the camera is typically moved by a user in different directions of space and operated to sequentially capture a set of images that can then be combined to obtain light-field data. However, to obtain exploitable light-field data, the user must, during capture, accurately orient the camera towards the same point of interest in the scene, whatever the viewpoint adopted. Yet no means currently enables the user to capture a set of images suited for acquiring reliable light-field data, so user error may result in optical information that cannot be exploited in post-processing.
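By way of non-limiting illustration, once a set of 2-D images has been sequentially captured from viewpoints arranged on a regular grid, it can be assembled into the same 4-D structure as the data produced by the devices described above. The sketch below assumes the views were captured in row-major order over a (U, V) grid of viewpoints; the function name and the row-major assumption are illustrative, and real captures would additionally require registration of the views around the common point of interest.

```python
import numpy as np

def views_to_lightfield(views, grid_shape):
    """Arrange sequentially captured 2-D views into a 4-D light-field.

    views: list of equally sized 2-D arrays, captured in row-major
    order while the camera was moved over a (U, V) grid of viewpoints.
    grid_shape: (U, V) number of viewpoints along each direction.
    """
    U, V = grid_shape
    if len(views) != U * V:
        raise ValueError("expected exactly U * V views")
    H, W = views[0].shape
    lf = np.empty((U, V, H, W))
    for i, view in enumerate(views):
        # Row-major order: view i sits at grid position (i // V, i % V).
        lf[i // V, i % V] = view
    return lf
```

This makes concrete why mis-aimed captures are problematic: if any view in the sequence is not oriented towards the common point of interest, the corresponding angular sample of the assembled light-field is inconsistent with its neighbours and the combined data cannot be reliably exploited.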
Thus, it would be desirable to provide a user of a conventional capture device with instructions and guidance that facilitate proper operation of the capture device for acquiring images from which light-field data can be obtained.