Consider the predicament of a helicopter pilot flying in darkness or low-visibility weather conditions. Equipped with only conventional navigation systems, today's pilot is forced to fly slowly and find a way through the murk.
Perhaps the helicopter has a modern global positioning system (GPS) receiver on board coupled to a “moving map” display. This system shows the position of the aircraft as a symbol on a two-dimensional electronic map. It provides the same type of situational awareness that the pilot could achieve using a paper map, although much more easily and accurately.
Conventional moving maps do not show altitude above the ground, however. Rapid progress is being made in developing maps that change color as height over the ground changes, but while these systems offer an improvement in situational awareness, they are more suited for avoiding a mountain range than for threading one's way between peaks.
Radar altimetry is an example of an active sensor for detecting height above terrain. A radar altimeter emits radio waves and times how long they take to return to the aircraft after reflecting off the ground. The time delay is converted into distance. Radar altimeters are often used as a numerical aid during a precision approach to a runway. Radar altimeters have limited range, generally point straight down, and offer only one data point for each aircraft position. Radar, lidar and other types of ranging sensors can give a pilot a reasonable “picture” of objects in his immediate vicinity. However, skill and experience are needed to interpret radar images.
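The time-to-distance conversion above is straightforward to sketch. The following is a minimal illustration, not any particular altimeter's implementation; the example delay value is hypothetical.

```python
# Sketch: converting a radar altimeter's round-trip echo delay into
# height above terrain. The delay value used below is illustrative.

C = 299_792_458.0  # speed of light in vacuum, m/s

def radar_altitude(round_trip_delay_s: float) -> float:
    """Height above ground inferred from the echo's round-trip time.

    The pulse travels down to the ground and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_delay_s / 2.0

# A 2-microsecond round trip corresponds to roughly 300 m of clearance.
print(round(radar_altitude(2e-6)))  # 300
```

Because the measurement is a single vertical ray, each fix yields exactly one terrain data point directly beneath the aircraft, which is why the text above notes that altimetry alone cannot paint a picture of surrounding terrain.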
A new approach to vehicle control in low visibility is being developed. The concept is to replace the view a pilot sees out his cockpit window, i.e. his conventional vision, with Synthetic Vision, a three-dimensional perspective view of the outside world rendered on an artificial display. Synthetic Vision uses computer graphics to show terrain and man-made objects in the same position, orientation and perspective as they would appear if one were looking at them on a bright, clear day.
Of course, the Synthetic Vision system has to “know” where the terrain is and also where the aircraft is. For example, GPS sensors and microelectromechanical gyroscopes and accelerometers are often used to obtain the position and spatial orientation (“attitude”) of an aircraft. The location of terrain and objects on the ground is stored in a database that contains latitude, longitude and altitude (e.g. height above mean sea level) information for the ground, buildings, towers, etc. Clearly the utility of a Synthetic Vision system depends critically on the accuracy and precision of its database.
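A terrain database of the kind described above can be pictured as a regular latitude/longitude grid of elevations queried by position. The sketch below uses a nearest-neighbor lookup; the grid origin, spacing, and elevation values are hypothetical.

```python
# Sketch of a minimal terrain database: a regular latitude/longitude
# grid of elevations (meters above mean sea level), queried by
# nearest-neighbor lookup. All values below are illustrative.

from dataclasses import dataclass

@dataclass
class TerrainGrid:
    lat0: float        # latitude of the grid origin, degrees
    lon0: float        # longitude of the grid origin, degrees
    spacing: float     # grid spacing in degrees (e.g. 3 arc-seconds)
    elevations: list   # elevations[row][col], meters above MSL

    def elevation_at(self, lat: float, lon: float) -> float:
        # Snap the query point to the nearest grid node.
        row = round((lat - self.lat0) / self.spacing)
        col = round((lon - self.lon0) / self.spacing)
        return self.elevations[row][col]

grid = TerrainGrid(lat0=39.0, lon0=-82.0, spacing=3 / 3600,
                   elevations=[[250.0, 252.0],
                               [251.0, 255.0]])
print(grid.elevation_at(39.0, -82.0 + 3 / 3600))  # 252.0
```

A production system would interpolate between grid nodes and index buildings and towers separately, but the essential point stands: every rendered pixel ultimately traces back to stored values like these, so errors in the database become errors in the picture.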
One source of terrain data for Synthetic Vision is the Shuttle Radar Topography Mission (SRTM), which employed a specially modified radar system that flew onboard the Space Shuttle Endeavour during an 11-day mission in February 2000. The mission obtained elevation data on a global scale to generate a nearly complete high-resolution digital topographic database of Earth. NASA has released version 2 of the SRTM digital topographic data (also known as the “finished” version). For regions outside the United States the new data set is sampled at 3 arc-seconds, which is 1/1200th of a degree of latitude and longitude, or about 90 meters (295 feet).
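The "about 90 meters" figure follows directly from the geometry of angular sampling. The sketch below uses a spherical-Earth approximation with a mean radius constant; it is an illustration of the arithmetic, not part of the SRTM specification.

```python
import math

# Sketch: why a 3 arc-second sample spacing is "about 90 meters".
# A spherical-Earth approximation is used; the mean radius below is
# a common textbook constant, not taken from the SRTM documentation.

EARTH_RADIUS_M = 6_371_000.0

def arcsec_to_meters(arcsec: float, latitude_deg: float = 0.0) -> float:
    """Ground distance spanned by a small angle given in arc-seconds.

    Along a meridian the spacing is essentially latitude-independent;
    along a parallel it shrinks by cos(latitude).
    """
    angle_rad = math.radians(arcsec / 3600.0)
    return EARTH_RADIUS_M * angle_rad * math.cos(math.radians(latitude_deg))

print(round(arcsec_to_meters(3.0)))  # 93, i.e. roughly 90 m
```

Note that east-west spacing tightens toward the poles, which is why SRTM post spacing is usually quoted as an approximate figure.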
Version 2 is the result of a substantial editing effort by the National Geospatial-Intelligence Agency and exhibits well-defined water bodies and coastlines and the absence of spikes and wells (single pixel errors), although some areas of missing data (‘voids’) are still present. Therefore, while the SRTM provides an excellent baseline, it is still imperative that Synthetic Vision databases be checked for accuracy before pilots depend on them for terrain clearance.
Researcher Maarten Uijt de Haag has disclosed a database validation system that involves comparing radar altimeter measurements, GPS altitude and a terrain database to find places where the database is inaccurate or unreliable. The aircraft's altitude above mean sea level is measured by GPS, while its height above terrain is measured by the radar altimeter. The difference between those two altitudes should equal the terrain elevation stored in the database. If the system finds a significant difference then a warning signal is sent to the pilot. This approach serves the function of monitoring the accuracy and reliability of the terrain database, but it does not update the database with the sensor measurements or morph the terrain skin shown to the pilot in real time on a Synthetic Vision display. (See, for example: M. Uijt de Haag et al., “Terrain Database Integrity Monitoring for Synthetic Vision Systems”, IEEE Transactions on Aerospace and Electronic Systems, vol. 41, pp. 386-406, April 2005; A. Vadlamani et al., “Improved Downward-Looking Terrain Database Integrity Monitor and Terrain Navigation,” Proc. IEEE Aerospace Conf., pp. 1594-1607, March 2004; and, A. Vadlamani et al., “A 3-D Spatial Integrity Monitor for Terrain Databases,” Proc. 23rd IEEE/AIAA Digital Avionics Systems Conf. (DASC), pp. 4.C.2-1-4.C.2-13, October 2004; all of which are incorporated herein by reference.)
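The core comparison in such an integrity monitor is simple to express: GPS altitude minus radar altitude yields a sensor-derived terrain elevation, which should agree with the stored database value. The following is a minimal sketch of that check only, with a hypothetical disagreement threshold; it is not the cited authors' implementation.

```python
# Minimal sketch of a terrain-database integrity check: the GPS
# altitude minus the radar altitude gives a sensor-derived terrain
# elevation, which should match the stored database value.
# The 30 m threshold below is an illustrative assumption.

WARN_THRESHOLD_M = 30.0

def database_disagrees(gps_altitude_m: float,
                       radar_altitude_m: float,
                       database_elevation_m: float,
                       threshold_m: float = WARN_THRESHOLD_M) -> bool:
    """Return True if a warning should be raised, i.e. the sensed
    terrain elevation differs from the database by more than the
    threshold."""
    sensed_elevation_m = gps_altitude_m - radar_altitude_m
    return abs(sensed_elevation_m - database_elevation_m) > threshold_m

# Aircraft at 1500 m MSL, 400 m above ground: sensed terrain is 1100 m.
print(database_disagrees(1500.0, 400.0, 1100.0))  # False: database agrees
print(database_disagrees(1500.0, 400.0, 1000.0))  # True: 100 m mismatch
```

As the text notes, a monitor of this kind only flags disagreement; it neither corrects the stored elevation nor alters what the pilot sees rendered on the display.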
Suppose that an elevation database is available for the area in which the helicopter mentioned earlier is flying. In low visibility conditions a Synthetic Vision display showing a three-dimensional computer graphic depiction of nearby terrain would be very valuable. But what level of database reliability does the pilot require in order to believe that when his Synthetic Vision shows no obstacles, there really is nothing out there?
Alternatively, suppose the helicopter is equipped with radar, lidar, or, generically, any kind of ranging sensor. Such a system can display images of the outside world as built up from real-time ranging measurements. Although radar systems are helpful, they have fundamental limitations. First of all, radar shows an image of what it sees at the moment, but it doesn't remember what it saw before. The radar transmitter must be pointed in the direction one wants to see. Also, radar does not see around a corner in a valley, or over a ridge. Finally, radar necessarily involves the transmission of high power electromagnetic energy which may be detected by an enemy or, worse, provides a homing signal for enemy weapons.
Clearly Synthetic Vision and radar each have limitations that the other could help overcome. Unfortunately no system exists that combines the best characteristics of each technology in an intuitive presentation that any pilot can understand.
A helicopter flight in poor visibility is only one example of a scenario where Synthetic Vision is valuable. Boats, ships, hovercraft, airplanes, submarines, cars and trucks all find themselves in situations where a clear view outside, albeit a synthetic view, could spell the difference between successfully completing a mission and a disastrous collision. In each case, knowledge that the synthetic view had been validated by a ranging sensor (e.g. sonar on a submarine) would increase the confidence of the user of the display. In situations where stealth is required, users might be satisfied if they knew that their synthetic database view had been recently updated by a friendly vehicle.
Reliance on static databases is a shortcoming of present Synthetic Vision systems. Synthetic Vision systems should instead treat their databases as information that changes dynamically. Visualization of the database (i.e. on a three-dimensional perspective Synthetic Vision display) should provide cues as to the reliability of the data presented.
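One way to picture such reliability cues is to tag each terrain cell with a validation state and map that state to a distinct rendering style. The states and cues below are illustrative choices for the sake of the sketch, not part of any existing Synthetic Vision standard.

```python
# Sketch: tagging terrain cells with a validation state and mapping
# each state to a display cue. The states and cue descriptions are
# hypothetical illustrations, not an established convention.

from enum import Enum

class CellStatus(Enum):
    UNVERIFIED = "unverified"  # database value only, never checked
    VALIDATED = "validated"    # confirmed by a ranging sensor
    SUSPECT = "suspect"        # a sensor disagreed with the database

DISPLAY_CUE = {
    CellStatus.UNVERIFIED: "gray wireframe",
    CellStatus.VALIDATED: "solid green shading",
    CellStatus.SUSPECT: "flashing red outline",
}

def cue_for(status: CellStatus) -> str:
    """Rendering cue for a terrain cell's validation state."""
    return DISPLAY_CUE[status]

print(cue_for(CellStatus.VALIDATED))  # solid green shading
```

The point of the sketch is the separation of concerns: sensor measurements update per-cell state, and the renderer translates state into an at-a-glance visual cue the pilot can interpret without reading numbers.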
To reach the full potential of Synthetic Vision, a system must be created that displays a Synthetic Vision database in three-dimensional perspective according to a user's attitude and position while simultaneously providing intuitive cues that help the user understand which data has been validated.