This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Conventional image capture devices project a three-dimensional scene onto a two-dimensional sensor. During operation, a conventional capture device captures a two-dimensional (2-D) image of the scene, representing the amount of light that reaches a photosensor (or photodetector) within the device. However, this 2-D image contains no information about the directional distribution of the light rays reaching the photosensor (which may be referred to as the light field). The direction of incoming light, for example, is lost during such 2-D acquisition, and information such as depth cannot be recovered from a single capture. Thus, a conventional capture device does not store most of the information about the light distribution of the scene.
Light-field capture devices (also referred to as “light-field data acquisition devices”) have been designed to measure a four-dimensional (4D) light field of the scene by capturing the light from different viewpoints of that scene. Thus, by measuring the amount of light traveling along each ray of light that intersects the photosensor, these devices can capture additional optical information (information about the directional distribution of the bundle of light rays), enabling new imaging applications through post-processing. The information acquired by a light-field capture device is referred to as the light-field data. Light-field capture devices are defined herein as any devices capable of capturing light-field data. There are several types of light-field capture devices, among which:
- plenoptic devices, which use a microlens array placed between the image sensor and the main lens, as described in document US 2013/0222633;
- camera arrays.
Light-field data may also be simulated with Computer-Generated Imagery (CGI), from a series of 2-D images of a scene, each taken from a different viewpoint with a conventional handheld camera.
Light-field data processing notably comprises, but is not limited to, generating refocused images of a scene, generating perspective views of a scene, generating depth maps of a scene, generating extended-depth-of-field (EDOF) images, generating stereoscopic images, and/or any combination of these.
Hence, among other things, a 4D light field (4DLF) allows computing various refocused images with adjustable depth of field, focusing distances and viewing positions. However, the user experience is often limited to simple rendering on TV sets, monitors, and 2-D computer or mobile displays.
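By way of illustration, the synthetic refocusing mentioned above is commonly performed with the classic “shift-and-add” method: each sub-aperture view of the 4D light field is shifted in proportion to its angular offset from the central view, then all views are averaged. The sketch below is a simplified, integer-pixel illustration of that technique; the array layout `(U, V, H, W)` and the function name `refocus` are illustrative assumptions, not part of the present disclosure:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, H, W), holding the sub-aperture
        views indexed by angular coordinates (u, v) and spatial (y, x).
    alpha: relative refocusing parameter; each view is shifted in
        proportion to its offset from the central view, then averaged.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # integer-pixel shift proportional to the angular offset
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` all views are simply averaged (focus at the main-lens focal plane); varying `alpha` sweeps the synthetic focal plane through the scene. A production implementation would use sub-pixel (interpolated) shifts rather than `np.roll`.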
More generally, current light-field editing techniques are limited to changing perspective or focus. However, as the number of captured and shared light fields increases, there is a growing need for editing tools offering the same functions as the well-established editing tools for 2-D images. Indeed, image-editing programs such as Adobe Photoshop® provide ways of modifying an object's appearance in a single image by manipulating the pixels of that image.
Nonetheless, the multidimensional nature of light fields makes common image-editing tasks complex in light-field space.
First, a light field is a four-dimensional data structure, whereas most existing editing tools and displays are designed for two-dimensional content. Second, light fields are redundant, comprising multiple views of the same scene: an editing task performed on one of these views must be propagated to all the others for consistency, which is both cumbersome and time-consuming.
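The propagation problem described above can be sketched as follows: given a per-pixel disparity map relating a reference view to the other views, an edit made in the reference view is re-projected into every other view. The sketch below assumes a simplified 3D stack of horizontally aligned views and a disparity that scales linearly with the baseline; the function `propagate_edit` and its parameters are hypothetical illustrations, not the method of the present disclosure:

```python
import numpy as np

def propagate_edit(views, disparity, edit_mask, new_value, ref=0):
    """Propagate a pixel edit from a reference view to all other views.

    views: array (N, H, W) of grayscale, horizontally aligned views.
    disparity: (H, W) per-pixel disparity of the reference view,
        assumed to scale linearly with the baseline (a simplification).
    edit_mask: boolean (H, W) mask of the edited pixels in the reference.
    new_value: value written to the edited pixels in every view.
    """
    N, H, W = views.shape
    edited = views.copy()
    ys, xs = np.nonzero(edit_mask)
    for n in range(N):
        baseline = n - ref
        # shift each edited pixel by its disparity times the baseline
        xt = np.round(xs + disparity[ys, xs] * baseline).astype(int)
        valid = (xt >= 0) & (xt < W)
        edited[n, ys[valid], xt[valid]] = new_value
    return edited
```

Even in this simplified form, the sketch illustrates why per-view editing is cumbersome: it presupposes accurate correspondences (here, the disparity map) and still fails silently in occluded areas, where the edited surface is not visible from every view.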
In “Plenoptic image editing”, Proceedings of the IEEE 6th International Conference on Computer Vision, 1998, Seitz and Kutulakos presented a method of interactive image-editing operations designed to maintain consistency between multiple images of a physical 3-D scene: edits to any one image propagate automatically to all other images, as if the 3-D scene itself had been modified. Hence, the user can quickly modify many images by editing just a few. Propagation to the other images relies on a plenoptic decomposition into separate shape and radiance components; the propagation mechanism uses voxel-based reconstruction to obtain pixel-correspondence information.
Such a technique focuses on circular 360° light fields, which are acquired either by rotating the camera around the scene or by rotating the object to be captured (see FIG. 4). Hence, it cannot be directly applied to planar light fields, which are acquired by either a camera array or a plenoptic device.
Moreover, according to this technique, a user needs to navigate through different views of the scene to fully edit the light field and to handle occluded areas: in other words, for some edits concerning occluded areas, the user must perform the editing on several different images of the scene so that it is propagated to all views of the scene.
This is both cumbersome and time-consuming.
It would be desirable to provide a technique for robustly selecting a surface, or material, in a light field that shows improvements over the prior art. Notably, it would be desirable to provide such a technique enabling a user to select materials in a light field, in order to edit them, without the need to navigate through multiple views.