The contexts of application of the invention are in particular:
the analysis and detection of dominant structures (ground, walls, etc.);
the segmentation of the scene; and
the detection of 3D obstacles.
Nevertheless, it would be wrong to consider the use of orientations to be limited to these applications. Orientations may be used in any system that analyzes scenes in real time on the basis of 3D data.
The computation of orientations is a field explored in machine vision and image processing, but essentially for images representing 3D information.
There are different types of 3D vision system:
Systems of the 3D scanner or time-of-flight (TOF) camera type: this type of 3D sensor delivers a depth map in which each pixel corresponds to the distance between a point of the scene and a specific point. The depth maps obtained are generally quite precise, but they nevertheless contain aberrations (e.g. speckle for TOF images). The cost of these systems is high (one or more thousands of dollars), limiting their use to applications in which cost is not a major constraint. In addition, many of these 3D sensors cannot be used in real-time applications because of their low frame rate; and
systems of the stereoscopic type, which are generally composed of an array of cameras and/or projectors associated with specific processing operations (disparity computation). They are suitable for real-time applications and their cost is much lower: they cost the same as standard cameras (such cameras are sometimes already present for other applications, mention being made, by way of example, of “reversing cameras”). In contrast, these images are noisier (sensitivity to lighting conditions, problems with surfaces whose texture is too smooth, etc.), and the depth map deduced from the disparity map is not dense. The non-linear conversion of the disparity map into the depth map leads to a nonuniform density of information in the depth map. Typically, data closer to the camera are denser, and data at the borders of objects may be imprecise.
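The non-linear character of the disparity-to-depth conversion mentioned above may be illustrated with a short sketch; the focal length and baseline values below are illustrative assumptions, not taken from any particular system:

```python
import numpy as np

# Disparity-to-depth conversion for a rectified stereo pair:
#   z = f * B / d
# where f is the focal length in pixels, B the baseline in metres and
# d the disparity in pixels. The values below are illustrative only.
f, B = 700.0, 0.12
disparities = np.array([70.0, 35.0, 14.0, 7.0])  # in pixels
depths = f * B / disparities                     # resulting depths in metres

# A one-pixel disparity error translates into a depth error that grows
# roughly quadratically with distance, which is why data far from the
# camera are less precise than data close to it:
depth_err = f * B * (1.0 / (disparities - 1.0) - 1.0 / disparities)
```

Running this sketch shows depths of 1.2 m to 12 m, with the depth uncertainty at the far end about two orders of magnitude larger than at the near end.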
Existing real-time solutions compute 3D characteristics from 2D media. The following process, which is illustrated in FIG. 1, is common to these methods:
On the basis of a 3D depth or disparity map, a 3D map of points is computed. It will be recalled that a 3D depth or disparity map is a perspective or orthogonal projection of the 3D space onto a 2D plane; it is in fact a question of 2D images with which depth information is associated. A depth z is associated with each pixel of the (x, y) plane of the image: p(x, y) = z. On the basis of p(x, y) and of z, the X, Y and Z coordinates of the corresponding point of the scene are obtained by processing these data in a way known to those skilled in the art. After processing all of the pixels, a 3D point cloud is obtained, the coordinates of which in the scene are represented by a 2D image for each of the X, Y and Z components: a 2D image in which each pixel is associated with its X coordinate, a 2D image in which each pixel is associated with its Y coordinate, and a 2D image in which each pixel is associated with its Z coordinate;
characteristics are computed in each of the three 2D images or in a combination of these images: for each pixel, statistical characteristics (averages, correlations, etc.) are computed using the 3D map of points;
integral images associated with these characteristics are computed and stored in memory: an integral image is computed for each characteristic. It will be recalled that an integral image is an intermediate image of the same dimensions as the 3D map of points, in which the value associated with each pixel is equal to the sum of the values of all the pixels located above it and to its left; and
3D orientation is computed: for each pixel or superpixel (i.e. a group of a few pixels), a rectangular 2D zone centered on the pixel is defined. Characteristics are computed locally in this rectangular zone using the integral images. To compute each characteristic (defined by the sum of the values in this rectangular zone), the corresponding map need be accessed only four times, once per vertex of the rectangular zone. Using the characteristics computed on this rectangle, a principal component analysis is carried out, and the orientation is given by the eigenvector associated with the smallest eigenvalue.
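The steps above (back-projection into X, Y and Z images, integral images of the moments, four-access rectangle sums, then PCA per pixel) can be sketched end to end as follows; the camera intrinsics, window size and helper names are illustrative assumptions, not part of any claimed method:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into per-pixel X, Y and Z coordinate
    images (pinhole model; the intrinsics here are illustrative)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return (u - cx) * depth / fx, (v - cy) * depth / fy, depth

def integral(img):
    """Integral image, padded with a zero row/column so that rectangle
    sums over empty regions are zero."""
    return np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1),
                  ((1, 0), (1, 0)))

def box_sum(ii, y0, y1, x0, x1):
    """Sum of the original image over rows y0..y1-1 and columns
    x0..x1-1, using four accesses (one per vertex of the rectangle)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def normal_at(ii, y, x, half, h, w):
    """Surface normal at pixel (y, x): PCA over a square window, with
    the covariance built from integral images of the first and second
    moments of (X, Y, Z)."""
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    n = (y1 - y0) * (x1 - x0)
    keys = ["X", "Y", "Z"]
    s = [box_sum(ii[k], y0, y1, x0, x1) for k in keys]
    C = np.empty((3, 3))
    for i in range(3):
        for j in range(i, 3):
            sij = box_sum(ii[keys[i] + keys[j]], y0, y1, x0, x1)
            C[i, j] = C[j, i] = sij / n - s[i] * s[j] / n ** 2
    vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return vecs[:, 0]                # eigenvector of smallest eigenvalue

# Usage on a synthetic flat, fronto-parallel plane at Z = 1:
depth = np.ones((32, 32))
X, Y, Z = backproject(depth, fx=30.0, fy=30.0, cx=16.0, cy=16.0)
maps = {"X": X, "Y": Y, "Z": Z}
maps.update({a + b: maps[a] * maps[b]
             for a in "XYZ" for b in "XYZ" if a <= b})
ii = {k: integral(v) for k, v in maps.items()}
normal = normal_at(ii, 16, 16, half=3, h=32, w=32)
# for this plane the normal is aligned with the optical axis (0, 0, ±1)
```

Note that, as described above, each of the six second-moment characteristics costs only four memory accesses per window, regardless of the window size; this is what makes the per-pixel PCA affordable in real time.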
By way of example of such a stereoscopic system, mention may be made of that described in the publication “Terrain Traversability Analysis using Organized Point Cloud, Superpixel Surface Normals-based segmentation and PCA-based Classification” by Aras Dargazany and Karsten Berns [Dar2014], published in 2014. The authors present a system for analyzing terrain and helping unmanned ground vehicles to navigate (and, more particularly, for analyzing the traversability of terrain). The problem is reformulated and divided into two main sub-problems: surface detection and surface analysis. The proposed approach uses a stereoscopic system to generate a dense point cloud from a disparity map.
In order to segment the elements of the scene into a plurality of surfaces (which are represented by superpixels), the authors propose to apply an image segmentation method to the point cloud, this method employing 3D orientations.
To analyze the surfaces of the scene, the authors introduce a new method called SSA (Superpixel Surface Analysis). It is based on the orientation of the surface, on the estimation of its plane and on the analysis of traversability using the planes of the superpixels. This method is applied to all the detected surfaces (segments of superpixels) in order to classify them according to their traversability index. To do this, five classes are defined: traversable, semi-traversable, non-traversable, unknown and undecided.
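As a rough illustration of the kind of slope-based labeling such a surface analysis might perform, the toy classifier below assigns a class from the angle between a superpixel's estimated normal and the vertical; the thresholds and the function itself are assumptions for illustration, not the actual rules of [Dar2014]:

```python
import numpy as np

def classify_surface(normal, up=(0.0, 0.0, 1.0),
                     traversable_deg=15.0, semi_deg=30.0):
    """Toy traversability label from a surface normal.
    Thresholds are illustrative assumptions, not those of [Dar2014]."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # slope = angle between the surface normal and the vertical axis
    slope = np.degrees(np.arccos(abs(float(np.dot(n, up)))))
    if slope <= traversable_deg:
        return "traversable"
    if slope <= semi_deg:
        return "semi-traversable"
    return "non-traversable"
```

A horizontal patch (normal pointing straight up) would be labeled traversable, a gently tilted one semi-traversable, and a steep one non-traversable; the "unknown" and "undecided" classes of the publication would require additional information (e.g. missing or ambiguous data) not modeled here.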
However, the results obtained by this system do not have a uniform precision for each point of the 3D cloud.
Therefore, there remains to this day a need for a method for characterizing a scene by computing the 3D orientation of the observed elements that simultaneously meets all of the aforementioned requirements in terms of computing time, of the precision obtained and of the cost of the computing device.