For the exploration of extreme environments, for example other planets or deep-ocean regions, autonomously operating vehicles are increasingly being used, which must localise themselves in their environment in order to reach a pre-planned target location. One possibility consists in a method of the type mentioned above, which is based on camera-based navigation using existing topographical maps. Topographical maps of this type exist even for regions that have hitherto not been satisfactorily explored, such as the seabed of the deep ocean or planetary surfaces such as those of the moon and Mars. For the moon and Mars in particular, detailed topographical maps covering the entire surface exist, which contain both photographs and elevation relief obtained by means of stereo or laser distance measurements, owing to past cartographic missions of the American and Japanese space agencies NASA and JAXA. The quality of the map material is very high and reaches a resolution of a few meters for regions such as, in particular, the polar regions of the moon.
A landing vehicle that is to land precisely at a predetermined location must constantly monitor its position and attitude during the landing process in order to be able to correct deviations from a planned path. Since no satellite navigation systems exist beyond the Earth, the navigation must take place in some other manner. For a desired precision of a few hundred meters, navigation by means of RADAR systems from the Earth is not possible, and vision-supported navigation on the basis of known features on the planetary surface remains the only option. When imaging a planetary surface with the aid of a camera, the challenge lies in extracting and recognising features whose spatial locations are known with reference to a reference system. The position and attitude of the camera system with respect to the reference system can then be determined from the correspondence between two-dimensional locations in the camera image and three-dimensional locations in the reference system.
A fundamental principle of known methods for navigation on the basis of maps is that images of the terrain flown over are recorded with the aid of a camera system and that visual features are extracted from the recorded camera images, which features can be recognised in the map material available for the terrain. The current position and attitude can be determined by assigning the two-dimensional positions of the extracted features in the camera image to the 3D coordinates of the recognised features in the map material. For space vehicles it has to this end already been suggested to restrict this to a determination of the position, since the attitude can be measured more precisely with the aid of star cameras. The known methods differ principally in the choice of features that are extracted from the camera images and are to be recognised in the map material.
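As an illustration of this principle (not taken from any particular cited method), the following minimal sketch shows how the position alone can be recovered by linear least squares from 2D-3D correspondences when the attitude is already known, for example from a star camera. The function names and the use of normalized image coordinates are assumptions made here for illustration:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_position(R, pts3d, pts2d):
    """Estimate the camera position p given a known attitude R
    (world-to-camera rotation), 3D landmark coordinates in the reference
    system, and normalized image observations (u, v).

    Each landmark X with observation (u, v) yields the viewing ray
    d = R^T [u, v, 1]^T in the world frame; the constraint that X - p is
    parallel to d is written linearly as [d]x p = [d]x X and stacked into
    one least-squares system."""
    A_rows, b_rows = [], []
    for X, x in zip(pts3d, pts2d):
        d = R.T @ np.array([x[0], x[1], 1.0])  # viewing ray in world frame
        S = skew(d)
        A_rows.append(S)
        b_rows.append(S @ X)
    A = np.vstack(A_rows)
    b = np.hstack(b_rows)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

Each correspondence contributes two independent equations, so two landmarks already suffice in the noise-free case; in practice many features are stacked and the least-squares solution averages out measurement noise.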
A method developed essentially for navigation on the lunar surface, which is described in the article “Advanced Optical Terrain Absolute Navigation for Pinpoint Lunar Landing” by M. Mammarella, M. A. Rodrigalvarez, A. Pizzichini and A. M. Sanchez Montero in “Advances in Aerospace Guidance, Navigation and Control”, 2011, pages 419-430, is based on the recognition of craters. Here, the camera image is searched with a specifically developed image-processing operator for patterns of elliptical appearance with characteristic shadow casting, and the craters are thereby extracted from the image. At the same time, craters are detected in topographical maps of the lunar surface, known as digital elevation maps (DEM), and their 3D coordinates are stored in lunar coordinates. The assignment of map craters to craters detected in the camera image subsequently takes place essentially by means of an analysis of the crater constellation. Since craters can be recognised under different lighting conditions, a topographical map is then sufficient for achieving the navigation task.
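The constellation analysis mentioned above can be illustrated with a toy stand-in (the cited article does not disclose this exact scheme; the descriptor used here is an assumption for illustration): each crater is described by the ratios of the distances to its nearest neighbours, which are invariant under rotation, translation and scale, so image craters can be assigned to map craters by comparing these signatures.

```python
import numpy as np

def constellation_descriptor(pts, i, k=4):
    """Similarity-invariant signature of crater i: the sorted distances
    to its k nearest neighbours, normalized by the largest of them."""
    d = np.linalg.norm(pts - pts[i], axis=1)
    d[i] = np.inf                 # exclude the crater itself
    nn = np.sort(d)[:k]
    return nn / nn[-1]

def match_constellations(img_pts, map_pts, k=4):
    """Assign each image crater to the map crater with the most similar
    neighbour-distance signature (a toy version of constellation matching)."""
    map_desc = [constellation_descriptor(map_pts, j, k)
                for j in range(len(map_pts))]
    matches = []
    for i in range(len(img_pts)):
        di = constellation_descriptor(img_pts, i, k)
        j = int(np.argmin([np.linalg.norm(di - dj) for dj in map_desc]))
        matches.append((i, j))
    return matches
```

Because the signature depends only on distance ratios, the same craters are matched regardless of the altitude and heading at which the image was taken; a real system would additionally verify matches geometrically.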
In a method termed “LandStel”, which is described inter alia in the article “Vision-Based Absolute Navigation for Descent and Landing” by B. Van Pham, S. Lacroix and M. Devy in “Journal of Field Robotics”, Volume 29, Issue 4, July 2012, pages 627-647, it is not crater recognition but rather a detector of prominent points in the camera image, also termed the Harris operator, that is used. To achieve scale invariance, height information, for example from an altimeter, is used: the Harris features are calculated within an image scaled according to this height, from which scale invariance follows without the computational outlay of the SIFT operator described in the article “Distinctive Image Features from Scale-Invariant Keypoints” by David G. Lowe in the journal “International Journal of Computer Vision”, Volume 60, No. 2, pages 91-110, 2004. In this known method, constellations of Harris features that are transformed into a rotation-invariant feature vector are termed features. These feature vectors are subsequently used for recognising corresponding features between the map and the current camera image. In this case, the map contains a photograph of the terrain as well as a topographical map, in order to determine the 3D coordinates of the corresponding features.
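The Harris operator referred to above can be sketched as follows (a minimal version with finite differences and box-filter smoothing; real implementations use Gaussian smoothing, and in LandStel the image would first be resampled according to the altimeter height so that the detector operates at a fixed ground resolution):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response: smooth the structure tensor of the image
    gradients over a local window and evaluate det(M) - k * trace(M)^2.
    Large positive values indicate corner-like prominent points."""
    Iy, Ix = np.gradient(img.astype(float))   # finite-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a, r=2):
        """Mean filter over a (2r+1) x (2r+1) window (clipped at borders)."""
        out = np.zeros_like(a)
        h, w = a.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = a[max(0, y - r):y + r + 1,
                              max(0, x - r):x + r + 1].mean()
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

Along a straight edge only one gradient direction is present, so the determinant term vanishes and the response stays small; only at corner-like structures do both directions contribute, which is why the operator selects prominent points.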
In a further known method, the features are detected with the aid of what is known as the SIFT feature operator and features are likewise compared between a photograph and the current camera image; the 3D data are again taken from a topographical map. In yet another known method, small image patches around selected points are extracted, which are to be recognised in maps of the terrain by means of correlation operators.
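The comparison of features between photograph and camera image is typically carried out by nearest-neighbour matching of the descriptor vectors; a common acceptance criterion is the ratio test from Lowe's article cited above. The following sketch assumes descriptors are given as rows of NumPy arrays (the function name and the threshold value are assumptions for illustration):

```python
import numpy as np

def ratio_match(desc_img, desc_map, ratio=0.8):
    """Nearest-neighbour descriptor matching with the ratio test:
    a match (i, j) is accepted only if the best map descriptor j is
    clearly closer to image descriptor i than the second-best candidate,
    which suppresses ambiguous assignments."""
    matches = []
    for i, d in enumerate(desc_img):
        dists = np.linalg.norm(desc_map - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Ambiguous descriptors, for which two map features are almost equally close, are discarded rather than matched, which is preferable for navigation since a wrong 2D-3D correspondence corrupts the position estimate.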
In addition to these image-based approaches, methods have also become known that use depth data for navigation purposes and are based on the data of a LIDAR (Light Detection and Ranging) sensor.
Common to the image-based approaches is that either an attempt is made to develop lighting-independent operators, as is claimed for the crater-based approach, or maps are used that already exhibit lighting conditions similar to those of the images expected during navigation.
In addition, concrete methods for landing on celestial bodies have also become known. Thus, DE 10 2010 051 561 A1 describes a system for the automated landing of unmanned flying objects which presupposes the existence of a ground unit. DE 10 2009 036 518 A1 is concerned with carrying out the landing procedure of a space-travel flying object and in the process describes the actuator systems required for the landing procedure. Furthermore, DE 10 2008 064 712 B4 is concerned with a sensor-assisted landing assistance apparatus between helicopters and a landing platform. DE 195 21 600 A1 suggests an image-assisted navigation system for automatic landing which is based on equipping the landing area with artificial markers. DE 39 39 731 C2 likewise assumes that the landing area is equipped with helper markers and additionally suggests the use of planar depth sensors, such as laser scanners or RADAR. Finally, DE 21 26 688 A also suggests the use of visible markers at a ground station. Only in DE 31 10 691 C2 is a navigation system for a cruise missile presented, which obtains pulse trains on the basis of available map material and an active sensor present on the flying object, for example a laser measuring beam acquiring distance and intensity, and compares these pulse trains with pulse trains created manually or automatically from aerial images. Furthermore, the use of a plurality of measuring beams is suggested there.