Reflection of light is observed when visible electromagnetic waves encounter a surface that does not absorb all of their radiative energy and reflects some of it back. When the imperfections of a surface are smaller than the wavelength of the incident light (as is the case for a mirror, for example), all of the light is reflected specularly, i.e. the angle of reflection of the light is equal to its angle of incidence. On many materials, this effect produces specular spots, i.e. visible bright elements, in images.
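The law of specular reflection mentioned above can be sketched numerically. The following is a minimal illustration (the function name `reflect` is chosen here for the example, not taken from the document): the mirror-reflected direction is r = d - 2 (d . n) n, which makes the angle of reflection equal to the angle of incidence.

```python
import numpy as np

def reflect(incident, normal):
    """Mirror-reflect an incident direction about a unit surface normal.

    Implements the law of specular reflection r = d - 2 (d . n) n,
    so the angle of reflection equals the angle of incidence.
    """
    normal = normal / np.linalg.norm(normal)
    return incident - 2.0 * np.dot(incident, normal) * normal

# Example: light arriving at 45 degrees onto a horizontal mirror (normal = +z).
d = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
n = np.array([0.0, 0.0, 1.0])
r = reflect(d, n)

# Angles of incidence and reflection, measured from the normal, coincide.
cos_in = abs(np.dot(-d, n))
cos_out = abs(np.dot(r, n))
```

Here the incoming ray tilted 45 degrees from the normal leaves at 45 degrees on the other side, as the equality of the two cosines verifies.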
In general, these specularities are not taken into account in machine-vision algorithms because they are highly dependent on the viewpoint, on the materials present in the scene, on the settings of the video camera (exposure time, aperture) and on the geometry of the scene. These specularities also depend on the shape of the light source (bulb, fluorescent tube).
Now, these specularities are of major interest in various contexts. For augmented reality (AR) applications such as sales aids, the insertion of virtual elements into a real scene must be as natural as possible in order to allow an optimal user experience. Specifically, this insertion must be stable and include optical artefacts (the luminous context of the scene) such as specularities and shadows, elements that are essential for a realistic appearance.
Moreover, regarding the problem of real-time video-camera localization, current algorithms may be disrupted by specularities. Specifically, these algorithms are based on temporal tracking of primitives (generally points of interest), which may be completely or partially occluded by specularities. Current methods limit the problem to a certain extent by using robust estimation algorithms, which treat these zones of the image as noise. Nevertheless, for certain viewpoints, these specularities may saturate the video camera and cause the localization algorithms to fail. Ideally, these specularities should be considered as primitives in their own right, able to provide a great deal of additional information for reinforcing these localization algorithms.
These optical artefacts may also play an important role in the understanding and modelling of the behavior of light in a scene. Specifically, it is possible to deduce from these specularities the geometry of a surface on which they occur. These specularities may thus be used for quality-control applications in order to verify the integrity of a 3D surface during the manufacture of industrial parts.
The prediction of these optical artefacts and in particular the prediction of specularities is of major interest in the aforementioned applications. However, it is a difficult problem because specularities depend on the viewpoint of the observer, on the position, shape and intensity of the primary light sources and on the reflectance properties of the materials on which they occur.
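The dependence of a specularity on viewpoint, light source and material described above can be illustrated with the specular term of the Phong reflectance model (used here purely as a standard textbook illustration, not as the method of this document; the function name and parameters are chosen for the example):

```python
import numpy as np

def phong_specular(light_dir, view_dir, normal, k_s=1.0, shininess=32.0):
    """Specular intensity of the Phong model for one point light.

    light_dir: unit vector from the surface point towards the light.
    view_dir:  unit vector from the surface point towards the camera.
    The intensity depends jointly on the viewpoint, on the light
    direction and on the material (coefficient k_s, shininess exponent).
    """
    # Mirror reflection of the light direction about the normal.
    r = 2.0 * np.dot(normal, light_dir) * normal - light_dir
    return k_s * max(np.dot(r, view_dir), 0.0) ** shininess

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.0, 1.0])                       # light directly above
aligned = phong_specular(l, np.array([0.0, 0.0, 1.0]), n)
grazing = phong_specular(l, np.array([1.0, 0.0, 0.0]), n)
# Moving the camera away from the mirror direction extinguishes the highlight,
# which is why a specularity observed from one viewpoint does not predict
# its appearance from another without a model of light and material.
```

With the camera on the mirror direction the highlight is at full strength; viewed at grazing angle it vanishes entirely.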
The state of the art regarding the estimation of light on the basis of images may be divided into two categories: the modelling of overall illumination and the reconstruction of primary sources.
Light is an essential element in the formation of an image. Specifically, an image is the result of the interaction between light sources and objects of various materials, for a given sensor (eye, video camera). This light is emitted by one or more light sources, which fall into two categories:
- Primary sources, corresponding to bodies that produce the light that they emit. This category includes bodies at a very high temperature such as the sun, flames, incandescent embers or the filament of an incandescent lamp.
- Secondary or scattering sources, corresponding to bodies that do not produce light but redirect received light. Scattering is an effect in which a body, associated with a material, having received light, redirects it partially or completely in every direction. The amount of light scattered depends on the properties of the materials of the objects receiving the light. A scattering object is therefore a light source only when it is itself illuminated by a primary source or by another scattering object.
In their approach to modelling overall illumination, Jachnik et al. propose, in the publication “Real-Time Surface Light-field Capture for Augmentation of Planar Specular Surfaces”, ISMAR 2012, an indirect reconstruction of overall illumination, taking the form of a map of the luminous environment generated from all the images of a video sequence. This reconstruction is used to achieve photo-realistic rendering after a phase of initialization on a planar surface made of a specular material. However, this method is limited because it makes no distinction between primary and secondary sources. Therefore, this method does not allow specularities from unknown viewpoints to be predicted.
The approach to estimating overall illumination of Meilland et al., described in the publication “3D High Dynamic Range Dense Visual SLAM and its Application to Real-Time Object Re-Lighting”, ISMAR 2013, presents a reconstruction of a primary (point) source achieved by directly filming the lights in the scene. However, this method is unsuitable for more complex types of lighting such as fluorescent tubes, which must be represented by a set of point sources. In addition, dynamic lighting (lights being switched on or off) cannot be managed and, with regard to specularities, materials are not taken into account. Therefore, this method does not allow specularities from unknown viewpoints to be predicted.
The reconstruction of primary sources according to the method of Lagger et al., described in the publication “Using Specularities to Recover Multiple Light Sources in the Presence of Texture”, ICPR 2006, presents a reconstruction of the direction of the primary source on the basis of specularities on a moving object observed from a stationary viewpoint. This application is limited: since specularities depend on the viewpoint, they must be re-estimated for each position. In addition, neither the position nor the shape of the light source is estimated, and the material is not taken into account. Therefore, this method does not allow specularities from unknown viewpoints to be predicted.
The approach of Boom et al., described in the publication “Point Light Source Estimation based on Scenes Recorded by a RGB-D camera”, BMVC 2013, details a method for estimating a primary light source that is considered to be point-like, on the basis of a Lambertian (non-reflective) surface, using an RGB-D (Red Green Blue-Depth) sensor. This approach uses only the diffuse component to estimate the point source, a synthesized appearance being compared with the actual scene so as to approximate it as closely as possible. However, this method is not suitable for real-time application; it can handle only point sources, not fluorescent tubes, and it cannot manage the presence of specularities (it assumes Lambertian surfaces). In addition, this method is limited to one light source, whose shape is not estimated; furthermore, the specular component of materials is not taken into account. This method does not allow specularities from unknown viewpoints to be predicted.
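The Lambertian assumption underlying such diffuse-only approaches can be sketched with Lambert's cosine law (again a generic textbook illustration, with names chosen for the example): the observed brightness depends only on the angle between the surface normal and the light direction, and not on the viewpoint, which is precisely why specularities fall outside this model.

```python
import numpy as np

def lambert_diffuse(light_dir, normal, albedo=1.0, intensity=1.0):
    """Lambert's cosine law: diffuse shading is independent of viewpoint.

    Brightness depends only on the angle between the surface normal and
    the (unit) light direction, which is what allows a point source to
    be estimated from a diffuse surface alone, but also why this model
    cannot represent view-dependent specularities.
    """
    return albedo * intensity * max(np.dot(normal, light_dir), 0.0)

n = np.array([0.0, 0.0, 1.0])
head_on = lambert_diffuse(np.array([0.0, 0.0, 1.0]), n)            # light from above
oblique = lambert_diffuse(np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0), n)  # light at 45 degrees
```

A light directly above the surface gives full brightness, while one at 45 degrees gives cos(45°) of it, whatever the camera position.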
Regarding the prediction of specularities, there is no known method.
Therefore, there remains to this day a need for a method for processing images containing specularities that optionally allows specularities from other viewpoints to be predicted.