Autonomous driving systems must be able to discern objects within images captured by cameras, including humans, other vehicles, and road structures. Of particular importance to the successful control and navigation of autonomous vehicles over roads and through traffic is the ability to identify boundaries of traffic lanes. While traffic lanes are usually demarcated by simple lines and patterns, it is often difficult in practice for autonomous driving systems to identify lane boundaries due to road deterioration, lighting conditions, and confusion with other objects and patterns that may be found in a traffic scene, such as other vehicles or road-side structures.
Detection of traffic lane markings generally involves two distinct tasks: low-level feature detection within an image, and high-level abstraction and modelling. Detection therefore usually proceeds in two steps: detecting candidate markings and fitting a lane model. These computations are kept separate because actual physical traffic lane markings are often not well delineated, particularly from the viewpoint of the driver or a vehicle camera. Existing detection methods therefore attempt to organize candidate markings detected in real time according to pre-defined models based on the underlying structure of known traffic lanes. For example, lanes are assumed to be found in the vicinity of the vehicle and to extend in the direction along which the vehicle moves.
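As a rough illustration of the low-level step, the sketch below scans a grayscale image for bright pixels and reports them as crude lane-marking candidates. The function name, threshold value, and list-of-rows image representation are illustrative assumptions, not part of any method described here; a real detector would use filters tuned to marking shape and contrast.

```python
def detect_candidates(image, threshold=200):
    """Low-level step (illustrative): return (x, y) positions of pixels
    whose brightness meets `threshold`, a crude stand-in for a
    lane-marking feature detector.

    `image` is a list of rows, each a list of grayscale values 0-255.
    """
    return [(x, y)
            for y, row in enumerate(image)
            for x, value in enumerate(row)
            if value >= threshold]
```

The output of such a detector is a cloud of candidate points that the subsequent modelling step must organize into lane hypotheses.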
Previous solutions have been built on simplifying assumptions, heuristics, and robust modelling techniques. The assumptions and heuristics are used to identify lane candidates within an image; the candidates are then used to construct global lane models, which in turn filter out inappropriate outliers. The filters are generally robust model-selection methods, such as RANdom SAmple Consensus (RANSAC).
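To illustrate how RANSAC can act as the robust model-selection filter, the sketch below fits a line to candidate marking points by repeatedly sampling minimal subsets and keeping the hypothesis supported by the most inliers. The function name, parameter values, and the simple line model y = m*x + b are illustrative assumptions; practical lane models are typically curves in a road-aligned coordinate frame.

```python
import random

def ransac_line(points, iters=200, inlier_tol=1.0, seed=0):
    """Fit a line y = m*x + b to (x, y) points with RANSAC (illustrative).

    Each iteration samples two points, forms the line through them, and
    counts the points within `inlier_tol` vertical distance; the line
    with the largest inlier set wins.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # skip degenerate vertical pairs
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers
```

The trade-off discussed below is visible here: each false candidate lowers the chance that a sampled pair is outlier-free, so more iterations are needed to find a well-supported model.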
However, the separation between detection of candidate markings and construction of lane models limits the efficacy of these existing methods. On one hand, without sufficient context, it is often difficult to determine whether a small image patch is the border of a lane. On the other hand, unreliable detection adds to the difficulty faced by the modelling stage, which must filter out false detections and validate the inliers with global consistency. More robust model selection helps, but robustness comes at a high computational cost: the more unreliable the detection, the more trials are needed to achieve an acceptable modelling success rate. Previous studies relied on distinctive and visually conspicuous lane markings. As the task becomes more complex, however, for example where road markings are less clearly defined and recognizable, the performance of detectors deteriorates and robust modelling becomes more expensive.