1. Field of the Invention
Generally speaking, the present invention relates to advanced driving support systems, and more particularly to the detection and identification of obstacles, in particular other vehicles either crossing the vehicle's path or travelling in front of it.
2. Description of the Related Art
The detection and identification of obstacles has been one of the most actively researched topics in the field of advanced driving support systems over the last fifteen years, in particular among manufacturers of motor vehicles and motor vehicle equipment. Numerous solutions have been envisaged and put into practice for detecting and identifying obstacles from within a motor vehicle, essentially based on radar, lidar and ultrasound technologies for close-range, low-speed obstacles, and also on processes involving camera-based devices.
Radar and lidar sensors are typically used for long-distance detection, for example in ACC (Autonomous Cruise Control) systems, and can directly provide information on the position and, in some cases, the 3D speed of surrounding objects. This information allows the different objects to be assessed and useful data to be associated with them, such as their positions and speeds in three dimensions, both in relation to the vehicle fitted with the sensor device and in relation to a fixed reference point if the position of the vehicle is properly established with regard to that reference point.
But sensors of this type do not offer a wide field of vision and their angular positioning is never very precise. Moreover, they do not provide any information on the highway environment, such as the position of the vehicle within its traffic lane, the number of traffic lanes, the trajectory of the route, the classification of obstacles, or the possibility of recognizing infrastructural elements such as roadside panels. Moreover, long-range, narrow-field sensors are unable to detect at a sufficiently early stage so-called "cut-in" scenarios, where a vehicle arriving from a traffic lane different from that of the vehicle fitted with the sensor threatens to "cut in", leaving the driver responsible for controlling the situation.
Methods for detecting obstacles using on-board cameras can resolve such problems, which are otherwise beyond the capabilities of radar or lidar systems. The information provided by a camera comes in the form of a two-dimensional (2D) image, produced in general by the perspective projection of the real three-dimensional (3D) world onto the image plane.
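The perspective projection mentioned above can be sketched with the standard pinhole camera model. The following is an illustrative example only; the focal length, principal point and 3D coordinates are assumed values, not taken from this document.

```python
def project(point_3d, focal_length_px, cx, cy):
    """Pinhole projection of a camera-frame 3D point (X, Y, Z) onto
    the 2D image plane, yielding pixel coordinates (u, v)."""
    X, Y, Z = point_3d
    u = focal_length_px * X / Z + cx  # horizontal pixel coordinate
    v = focal_length_px * Y / Z + cy  # vertical pixel coordinate
    return u, v

# Assumed example: a point 2 m to the right and 20 m ahead of the camera,
# with a 1000 px focal length and the principal point at (640, 360).
u, v = project((2.0, 0.0, 20.0), 1000.0, 640.0, 360.0)  # -> (740.0, 360.0)
```

Note how depth information is lost in the projection: any point along the same ray through the optical center maps to the same pixel, which is why a single 2D image cannot by itself yield the distance to an obstacle.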
The methods that can be used for camera-based obstacle detection can be divided into three main categories: the recognition of a two-dimensional shape, the recovery of three-dimensional information by interpreting movement in monovision (in other words, from the motion captured by a single camera), and triangulation in stereovision. This last category is the most satisfactory in terms of the quality of the results obtained, as it provides relatively precise information on the distance to an obstacle.
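The triangulation in stereovision referred to above recovers depth from the disparity between the two images of a rectified stereo pair. A minimal sketch follows, assuming the standard relation Z = f·B/d; the focal length, baseline and disparity values are illustrative assumptions, not figures from this document.

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where d is the horizontal disparity between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Assumed example: focal length 1000 px, baseline 0.30 m between the
# two cameras, measured disparity of 15 px -> depth of 20 m.
z = stereo_depth(15.0, 1000.0, 0.30)  # -> 20.0
```

Because depth varies inversely with disparity, small disparity errors at long range translate into large depth errors, which is one reason sensor resolution matters for distance accuracy.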
Furthermore, the different types of camera sensors used up to now in motor vehicles to carry out driving support functions can be broken down into two main groups, each with advantages and disadvantages:
grey-level cameras, also known as black and white cameras;
color cameras.
A color camera can, in particular, aid the identification and recognition of rear lights. In this way, the range of detection is increased as a result of the color information provided. Conventionally, this type of camera is a grey-level camera in front of which a filter, known as a Bayer filter, is arranged, enabling the red, green and blue (RGB) components of each captured image to be calculated. However, these filters reduce the spatial resolution of the sensor device, with the result that the calculation of distance, often effected by measuring the separation in the image between the two lamps of a detected vehicle, is less precise with a single color sensor than with a single grey-level sensor. In certain driving support applications, in particular the BeamAtic™ function, this information is of crucial importance because the operation of switching the headlamp beam depends directly on the distance at which oncoming or followed vehicles are detected. However, above a certain distance, a grey-level sensor cannot distinguish the tail lamps of one vehicle from those of another and, as it receives no color information, cannot detect tail lamps at all. For the grey-level sensors currently in use, this maximum distance is 450 meters. The choice between a color sensor and a grey-level sensor is therefore a compromise between the range of detection and the accuracy of the distance estimate.
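The distance calculation described above, based on the image separation of a detected vehicle's two lamps, follows from the same pinhole relation: D = f·W/w, where W is the real spacing between the lamps and w its separation in pixels. The sketch below is illustrative only; the 1.5 m lamp spacing and the focal lengths are assumed values, and the halved focal length is merely a crude stand-in for the resolution loss caused by a Bayer filter.

```python
def distance_from_lamp_separation(sep_px, focal_length_px, lamp_spacing_m):
    """Estimate the distance to a vehicle from the pixel separation of
    its two lamps: D = f * W / w (pinhole model)."""
    if sep_px <= 0:
        raise ValueError("separation must be positive")
    return focal_length_px * lamp_spacing_m / sep_px

# Assumed example: lamps 1.5 m apart, focal length 1200 px, lamps
# appearing 10 px apart in the image -> estimated distance of 180 m.
d = distance_from_lamp_separation(10.0, 1200.0, 1.5)  # -> 180.0

# With effective resolution roughly halved (assumption standing in for a
# Bayer-filtered sensor), the same vehicle yields only a 5 px separation,
# so each one-pixel measurement error now shifts the estimate twice as far.
d_low_res = distance_from_lamp_separation(5.0, 600.0, 1.5)  # -> 180.0
```

This illustrates the compromise stated in the text: the estimate is quantized by pixel separation, so a lower-resolution (color) sensor gives a coarser, less precise distance than a full-resolution grey-level sensor.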
The use of stereoscopic vision systems, regardless of whether the two cameras used are color or grey-level cameras, enables the distance to detected objects to be calculated accurately enough for the BeamAtic™ function to be carried out in a satisfactory manner. Generally speaking, certain functions will require a higher resolution than others, or alternatively a narrower field of vision. But the presence of two cameras for a dedicated function represents a major cost item, with the result that this solution is often prohibitive.
There is, therefore, a need to provide a system and method that overcomes one or more problems of the past.