A wide range of electronic devices, including mobile wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, and the like, employ machine vision techniques to provide versatile imaging capabilities. These capabilities may include functions that assist users in recognizing landmarks, identifying friends and/or strangers, and a variety of other tasks.
Augmented reality (AR) systems have turned to model-based (e.g., 3D model) tracking algorithms or Simultaneous Localization And Mapping (SLAM) algorithms that operate on color or grayscale image data captured by a camera. SLAM algorithms may detect and track aspects of an environment (e.g., landmarks and target objects) based on identifiable features within the environment. Some SLAM systems may use point-based feature selection for the detection and tracking of aspects/target objects. However, many environments (e.g., man-made environments) have abundant edges conducive to detecting edge-like features. The process of identifying edge-like features and adding them to a 3D map of an environment is often referred to as line mapping. Typical line mapping systems, however, produce incomplete 3D lines because they often rely on the generation of at least three keyframes to triangulate one 3D line. The requirement to process three keyframes per 3D line places a heavy burden on devices with relatively low processing capability, such as mobile devices.
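The geometry behind keyframe-based line triangulation can be sketched as follows: each keyframe observation of a 2D line, together with the camera center, defines a back-projection plane in 3D, and the 3D line is recovered where the planes from different keyframes intersect. The sketch below is illustrative only and is not the mapping method described above; `intersect_planes` and the example plane values are hypothetical, and the planes are assumed to be given in Hessian form with unit normals.

```python
# Illustrative sketch: recover a 3D line as the intersection of two
# back-projection planes, one per keyframe observation of the line.
# Each plane is given in Hessian form n . x = d with a unit normal n.
# The plane values used below are hypothetical example inputs.

def cross(a, b):
    # Cross product of two 3-vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_planes(n1, d1, n2, d2):
    """Return (point, direction) of the line where two planes meet.

    Assumes n1 and n2 are unit normals of planes n . x = d.
    """
    u = cross(n1, n2)                  # direction of the 3D line
    if dot(u, u) < 1e-12:
        raise ValueError("planes are (nearly) parallel")
    # Seek a point p = a*n1 + b*n2 satisfying both plane equations;
    # with unit normals this reduces to a 2x2 linear system.
    c = dot(n1, n2)
    a = (d1 - c * d2) / (1.0 - c * c)
    b = (d2 - c * d1) / (1.0 - c * c)
    p = tuple(a * n1[i] + b * n2[i] for i in range(3))
    return p, u

# Two hypothetical back-projection planes from two keyframes:
# the plane z = 0 and the plane x = 1.
point, direction = intersect_planes((0.0, 0.0, 1.0), 0.0,
                                    (1.0, 0.0, 0.0), 1.0)
print(point, direction)  # → (1.0, 0.0, 0.0) (0.0, 1.0, 0.0)
```

Two non-parallel planes already determine a line, so in this simplified view additional keyframes would serve to confirm or refine the hypothesized 3D line; requiring at least three keyframes per line, as typical systems do, adds to the processing load noted above.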