In the past, pilots of vehicles such as aircraft have relied on unaided vision, control tower commands, and radar for situational awareness, such as the relative position and velocity of nearby objects and vehicles. Objects detected by radar are projected on a planar screen in a "bird's-eye view" as seen from high above the vehicle. Some systems display images from visible-light vehicle-mounted cameras on a cockpit screen to assist the pilot in understanding his or her surroundings. These systems provide a fixed point of view (POV) based on camera location and have limited range in low-light, dusty, and/or foggy environments. Still other systems incorporate infrared (IR) cameras to augment visible-light cameras, but IR images have limited resolution. Visible-light and IR cameras are typically placed in or near the nose of an aircraft to better approximate the pilot's POV. This location limits the size and weight of such cameras, thereby limiting their performance.
In still other systems, ground-based cameras provide images to the pilot. Unfortunately, these images are provided from the point of view of the camera capturing the image rather than from the point of view of the pilot. As a result, the pilot's mental workload is increased, because the pilot must infer the aircraft's actual position while looking at an image generated from a point of view different from his or her own. This is particularly troublesome during landings, when the pilot's workload is already heavy.