Current collision avoidance capabilities for aerial vehicles, such as drones, have various limitations. For example, collision avoidance methods may be limited to detecting large objects, such as walls, may have a slow reaction time, and/or may rely on 3D cameras for localizing nearby objects. These capabilities may not be applicable to agent-to-agent collision avoidance. A typical scenario in which existing methods fail is when two drones are flying on collision paths that intersect at 90 degrees. This may occur in open space or in confined areas such as hallways.
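For illustration only, a minimal closest-point-of-approach calculation (the function name and example numbers are assumptions, not part of any method described here) shows why two drones whose headings differ by 90 degrees can nonetheless be on a collision course:

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Time and miss distance of closest approach for two
    constant-velocity agents in the plane."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vv = vx * vx + vy * vy
    # Time at which the relative distance is minimized (clamped to the future).
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + t * vx, ry + t * vy
    return t, math.hypot(dx, dy)

# Drone A flies east; drone B flies north: perpendicular paths.
t, d = closest_approach((0.0, 0.0), (2.0, 0.0), (10.0, -10.0), (0.0, 2.0))
# Both drones reach the point (10, 0) at t = 5, so the miss distance is 0.
```

A reactive method that only detects large obstacles ahead would not flag this encounter, since each drone approaches from the other's side rather than from directly in front.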
Another known method of collision avoidance relies on either external localization or simultaneous localization and mapping (SLAM) algorithms. This method, however, works only after the vehicles share a common map. In addition, real-time SLAM algorithms require powerful computers that are not typically available on drones, and SLAM algorithms may be unreliable in dynamic environments.
Yet another method of collision avoidance is based on sharing sparse visual features to enable relative localization and cooperative planning of collision avoidance maneuvers. Radio communication between neighboring drones is then used to carry out such avoidance. Still, the reliance on radio communication between individual agents presents coordination difficulties.