With the development of digital imaging technologies, and especially of drone technologies, automatic Simultaneous Localization And Mapping (SLAM) has become a key enabling technology for both indoor and outdoor applications. However, achieving accurate SLAM has always been a difficult problem.
SLAM deals with the computational problem of constructing or updating a map of an unfamiliar environment while simultaneously keeping track of an agent's location within it. A typical SLAM system constructs stereoscopic frames and connects the frames to form a continuous map.
Presently, mainstream SLAM systems usually use monocular cameras or binocular cameras to realize stereoscopic environment mapping and positioning functionalities. Some newly developed research methods use depth cameras, i.e., they replace the monocular or binocular cameras with a structured-light or Time-Of-Flight (TOF) depth sensor. However, these approaches are greatly limited by their applicable conditions and by their relatively high costs.
In addition, some present mainstream SLAM approaches are normally implemented in combination with vision measurement. Popular vision measuring methods often use passive monocular or binocular lenses. Similar to a pair of human eyes, the basic principle of vision measurement is to use the disparity between different viewing angles to calculate the three-dimensional structure of the object of interest and to realize positioning. In the case of a monocular lens, disparity can only be generated by translation; therefore, its ability to perceive a three-dimensional environment depends on its motion characteristics, and that ability is relatively limited.
The disparity of a binocular camera is created by the baseline between the first lens and the second lens; therefore, its ability to image a stereoscopic environment is closely related to the length of that baseline. When the baseline length is fixed, an object that is too close or too far is undetectable because of blind spots. Along with the development of active depth sensors (structured light, TOF), SLAM based on active depth sensors has become a new research hotspot. However, due to the limitations of their performance, their design structures, and their cost, the applicable conditions of those active sensors are mostly small-scale indoor scenes. For example, because sunlight contains the full spectrum, a depth sensor based on structured light is not applicable outdoors. Furthermore, TOF depth sensors, when used outdoors, depend on relatively high emitting-light energy, relatively sophisticated sensor designs, and the like; therefore, they are not suitable for small-scale flying platforms.
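The relation between baseline length and the detectable depth range described above can be sketched as follows. This is an illustrative example only, not part of the disclosure; the pinhole-camera model, the focal length, the baseline, and the disparity search range are all hypothetical values chosen for demonstration.

```python
# Illustrative sketch (hypothetical values): triangulated depth from stereo
# disparity for a rectified binocular pair, Z = f * B / d.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z (meters) from focal length f (pixels), baseline B (meters),
    and disparity d (pixels), assuming a rectified pinhole stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# With a fixed baseline, the measurable depth range is bounded: the maximum
# disparity (the matcher's search range) sets the nearest detectable depth,
# while the smallest reliably measurable disparity sets the farthest.
f_px, baseline_m = 700.0, 0.12      # hypothetical calibration values
d_max, d_min = 128.0, 1.0           # hypothetical disparity range in pixels
z_near = depth_from_disparity(f_px, baseline_m, d_max)
z_far = depth_from_disparity(f_px, baseline_m, d_min)
print(f"detectable depth range: {z_near:.2f} m to {z_far:.2f} m")
```

Objects nearer than `z_near` or farther than `z_far` fall into the blind spots mentioned above; lengthening the baseline pushes the range outward but worsens near-field coverage.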
On the other hand, due to the limitations of their physical principles, it is difficult to use certain other technologies, e.g., an Inertial Measurement Unit (IMU), a barometer, and other such sensors, to construct accurate and widely applicable SLAM systems.
In view of the foregoing, there is a need for SLAM systems and methods that are precise and more practical under various conditions.
It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It also should be noted that the figures are only intended to facilitate the description of the embodiments. The figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.