This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present principles. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Conventional image capture devices render a three-dimensional scene onto a two-dimensional sensor. During operation, a conventional capture device captures a two-dimensional (2-D) image representative of an amount of light that reaches a photosensor (or photodetector) within the device. However, this 2-D image contains no information about the directional distribution of the light rays (which defines the light field) that reach the photosensor. Depth, for example, is lost during the acquisition. Thus, a conventional capture device does not store most of the information about the directional light distribution from the scene.
Light field capture devices (also referred to as “light field data acquisition devices”) have been designed to measure a four-dimensional (4D) light field of the scene by capturing the light from different viewpoints of that scene. Thus, by measuring the amount of light traveling along each beam of light that intersects the photosensor, these devices can capture additional optical information (information about the directional distribution of the bundle of light rays) for providing new imaging applications by post-processing. The information acquired/obtained by a light field capture device is referred to as the light field data. Light field capture devices are defined herein as any devices that are capable of capturing light field data. There are several types of light field capture devices, among which:
- plenoptic devices, which use a microlens array placed between the image sensor and the main lens, as described in document US 2013/0222633;
- camera arrays, where each camera images onto its own image sensor.
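For illustration only, the 4D light field described above can be sketched as a four-dimensional array in which two indices select the viewpoint (angular coordinates) and two select the pixel position (spatial coordinates). All names and dimensions below are hypothetical and not part of the present disclosure; the sketch merely contrasts a conventional 2-D image, which integrates over directions, with a single sub-aperture view, which retains the directional information.

```python
import numpy as np

# Hypothetical 4D light field L(u, v, x, y): (u, v) indexes the viewpoint
# (angular coordinates) and (x, y) the pixel position (spatial coordinates).
U, V, X, Y = 5, 5, 32, 32            # 5x5 grid of viewpoints, 32x32 pixels each
light_field = np.zeros((U, V, X, Y))

# A conventional 2-D photograph discards the angular dimensions: each pixel
# integrates the radiance over all ray directions reaching it.
conventional_image = light_field.sum(axis=(0, 1))    # shape (32, 32)

# A single sub-aperture view keeps the angular information: it is the slice
# of the light field seen from one fixed viewpoint (u0, v0).
view = light_field[2, 2]                             # shape (32, 32)
```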
The light field data may also be simulated with Computer Generated Imagery (CGI), or obtained from a series of 2-D images of a scene (called views when two differing images representative of the same scene are captured from different viewing points), each taken from a different viewpoint with a conventional handheld camera.
Light field data processing comprises notably, but is not limited to, generating refocused images of a scene, generating perspective views of a scene, generating depth maps of a scene, generating extended depth of field (EDOF) images, generating stereoscopic images, and/or any combination of these.
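As a sketch of the first of the processing operations listed above (generating refocused images of a scene), synthetic refocusing can be approximated by shifting each sub-aperture view proportionally to its angular offset and averaging the shifted views. The function name, the integer-pixel shift simplification and the `alpha` refocus parameter below are illustrative assumptions, not the claimed method.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum synthetic refocusing (illustrative sketch).

    light_field: array of shape (U, V, X, Y), sub-aperture views indexed
    by angular coordinates (u, v). alpha controls the synthetic focal plane.
    """
    U, V, X, Y = light_field.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2   # angular center of the view grid
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its angular offset from the
            # center, then accumulate (integer shifts for simplicity).
            dx = int(round(alpha * (u - uc)))
            dy = int(round(alpha * (v - vc)))
            out += np.roll(light_field[u, v], shift=(dx, dy), axis=(0, 1))
    return out / (U * V)
```

A spatially constant light field refocuses to the same constant image for any `alpha`, since shifting a uniform view leaves it unchanged.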
The present disclosure focuses more precisely on light field based images captured by a plenoptic device, as illustrated by FIG. 1 and disclosed by R. Ng et al. in “Light field photography with a hand-held plenoptic camera”, Stanford University Computer Science Technical Report CSTR 2005-02, no. 11 (April 2005).
Such a plenoptic device is composed of a main lens (11), a micro-lens array (12) and a photo-sensor (13). More precisely, the main lens (11) focuses the subject onto (or near) the micro-lens array (12). The micro-lens array (12) separates the converging rays into an image on the photo-sensor (13) behind it.
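A minimal sketch of how the raw image recorded by the photo-sensor (13) behind the micro-lens array (12) can be rearranged into sub-aperture views, assuming an ideal, axis-aligned lenslet grid in which each micro-lens covers an n × n block of photo-sensor pixels. The function name and this idealized geometry are simplifying assumptions for illustration only.

```python
import numpy as np

def lenslet_to_views(raw, n):
    """Rearrange a raw plenoptic sensor image into sub-aperture views.

    raw: 2-D sensor image where each micro-lens covers an n x n pixel block.
    Returns a 4-D array views[u, v, x, y], where (u, v) is the pixel position
    under each micro-lens (i.e., the viewpoint) and (x, y) the micro-lens index.
    """
    H, W = raw.shape
    X, Y = H // n, W // n
    raw = raw[:X * n, :Y * n]           # crop to a whole number of lenslets
    # Pixel (u, v) under micro-lens (x, y) samples the ray from viewpoint
    # (u, v) through spatial position (x, y).
    return raw.reshape(X, n, Y, n).transpose(1, 3, 0, 2)
```

Under this convention, `views[u, v, x, y]` equals `raw[x * n + u, y * n + v]`, i.e., one pixel per micro-lens is gathered into each sub-aperture view.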
State-of-the-art methods for enriching the video capture experience provided by a plurality of users, as described in US 2013/0222369, consist in the manual selection of a point of interest by one user of the plurality of users, for example by collecting user feedback.
Then, according to the prior art, each user manually selects, through their own device, the corresponding point of interest, and the focus is computed on a plane perpendicular to the optical axis of each device and passing through this object.
However, such methods of the prior art are not able to take into account the specificities of light field imaging (also known as plenoptic data), which records the amount of light (the “radiance”) at given points in space, in given directions. Indeed, such conventional video capture devices deliver conventional imaging formats.
It would hence be desirable to provide a technique for exploiting the plurality of views provided by a plurality of plenoptic devices that would not show these drawbacks of the prior art. Notably, it would be desirable to provide such a technique that would allow a finer rendering of objects of interest in videos obtained from light field based images.