An enhanced flight vision system (EFVS) is sometimes used by an aircraft flight crew in order to provide imagery of an airport terminal area and runway environment during times when meteorological conditions prevent a clear natural view of the external surroundings of the aircraft through the windscreen. For example, the EFVS may overlay an image of an airport terminal area and runway environment over the pilot's natural unaided view of the external surroundings of the aircraft through the aircraft's cockpit windscreen. Such imagery may improve the situation awareness of the flight crew during instrument approach procedures in low visibility conditions such as fog. For example, under Title 14 of the Code of Federal Regulations, part 91, a pilot may not descend below decision altitude (DA) or minimum descent altitude (MDA) to 100 feet above the touchdown zone elevation (TDZE) from a straight-in instrument approach procedure (IAP), other than Category II or Category III, unless the pilot can see certain required visual references. Such visual references include, for example, the approach lighting system, the threshold lighting system, and the runway edge lighting system. The pilot may, however, use an EFVS to identify the required visual references in low visibility conditions where the pilot's natural unaided vision is unable to identify these visual references. Accordingly, the use of an EFVS may minimize losses due to the inability of the pilot to land the plane and deliver cargo and/or passengers on time in low visibility conditions.
EFVS imagery is typically presented to the pilot flying (PF) on a head up display (HUD). The HUD is typically a transparent display device that allows the PF to view EFVS imagery while looking at the external surroundings of the aircraft through the cockpit windscreen. As long as visibility conditions outside of the aircraft permit the PF to see the external surroundings of the aircraft through the cockpit windscreen, the PF can verify that the EFVS is functioning properly such that the imagery on the HUD is in alignment with the PF's view of the external surroundings of the aircraft.
EFVS imagery is sometimes also presented to the pilot monitoring (PM) on a head down display (HDD). For example, in some countries, the system must present the EFVS imagery to the PM for confirmation that the EFVS information is a reliable and accurate indicator of the required visual references. The PM may also use the EFVS imagery to determine whether the PF is taking appropriate action during approach and landing procedures. The HDD is typically a non-transparent display device mounted adjacent to or within a console or instrument panel of the aircraft. The HDD is typically positioned such that the PM must look away from the cockpit windscreen in order to see the displayed imagery, and the PM is unable to see the external surroundings of the aircraft through the cockpit windscreen while viewing the HDD. As such, the HDD does not overlay the EFVS image over the PM's natural unaided view of the external surroundings of the aircraft. Without the context of the external surroundings, it is very difficult for the PM to detect problems in the EFVS imagery beyond gross failures such as loss of image or white-out images.
An EFVS typically uses either a passive or active sensing system to acquire data used to generate imagery of the airport terminal area and runway environment. A typical passive sensor, such as a forward looking infrared (FLIR) camera or visible light spectrum camera, receives electromagnetic energy from the environment and outputs data that may be used by the system to generate video images from the point of view of the camera. The camera is installed in an appropriate position, such as in the nose of an aircraft, so that the PF may be presented with an appropriately scaled and positioned video image on the HUD having nearly the same point of view as the PF when viewing the external surroundings of the aircraft through the HUD. However, while passive sensors provide higher quality video imagery, they may be unable to identify required visual references in certain low visibility conditions such as heavy fog.
Active sensing systems, such as millimeter wavelength radar systems (e.g., 94 GHz), transmit electromagnetic energy into the environment and then receive return electromagnetic energy reflected from the environment. The active sensing system is typically installed in an appropriate position, such as in the nose of an aircraft. Active sensing systems do not generate the same video imagery as passive sensing systems, but rather map the received return energy into three-dimensional (3-D) models (e.g., using a polar coordinate system with range, azimuth and elevation from the nose of an aircraft). The 3-D model may then be rendered into a two-dimensional (2-D) image that may be appropriately scaled, positioned, and presented to the PF on the HUD in much the same way as video imagery from a passive sensing system. However, while active millimeter wavelength radar systems provide better identification of required visual references than passive sensing systems in low visibility conditions such as heavy fog, the quality of the imagery is not as good.
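The mapping described above can be sketched in code. The following is a minimal, illustrative sketch (not taken from any particular EFVS implementation): a radar return expressed in polar coordinates (range, azimuth, elevation) relative to the nose of the aircraft is converted to Cartesian coordinates, then projected onto a 2-D image plane with a pinhole-style projection so the rendered point can be scaled and positioned like video imagery. The function names, the axis convention (x right, y down, z forward), and the focal-length parameter are assumptions chosen for illustration.

```python
import math

def polar_to_cartesian(rng, az, el):
    """Convert a radar return given as (range, azimuth, elevation),
    angles in radians and range in meters, into Cartesian coordinates
    referenced to the nose of the aircraft: x right, y down, z forward.
    Axis convention is illustrative, not from any specific system."""
    z = rng * math.cos(el) * math.cos(az)  # distance along boresight
    x = rng * math.cos(el) * math.sin(az)  # lateral offset (right positive)
    y = -rng * math.sin(el)                # vertical offset (down positive)
    return x, y, z

def project_to_image(x, y, z, focal_px, cx, cy):
    """Pinhole-style projection of a 3-D point onto a 2-D image plane.
    focal_px sets the scale; (cx, cy) is the image center in pixels,
    standing in for the scaling/positioning done for the HUD view."""
    if z <= 0:
        return None  # point is behind the image plane; not drawn
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return u, v

# A return directly on the boresight at 1000 m should land at the
# image center, consistent with the PF's straight-ahead point of view.
x, y, z = polar_to_cartesian(1000.0, 0.0, 0.0)
uv = project_to_image(x, y, z, 800.0, 512.0, 384.0)
```

In this sketch the boresight return projects exactly to the image center (cx, cy); returns off to the side or above the boresight land proportionally farther from center, which is how the rendered 2-D image preserves the geometry of the 3-D model.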
Additionally, both passive FLIR cameras and active millimeter wavelength radar systems may have limited range in certain low visibility conditions such as heavy fog. Furthermore, regardless of the sensor technology used, current EFVS designs generate image views that are positioned and scaled with respect to a point of view useful for a PF using a HUD and having the additional context of the external surroundings of the aircraft while looking through the cockpit windscreen. The EFVS images are not rescaled, repositioned, or provided with any additional flight information symbology or situational context for use by a PM when viewed on an HDD. Such EFVS images alone are of limited use to a PM using an HDD to verify the reliability and accuracy of the EFVS and/or to determine that the PF is taking appropriate action during approach and landing procedures. There is an ongoing need for an improved EFVS having a sensing system tuned to identify visual references required for aircraft approach and landing in low visibility conditions with sufficient accuracy and range. There is a yet further need for an improved EFVS capable of providing imagery on an HDD that is useful to a PM to verify the reliability and accuracy of the EFVS, and to determine that the PF is taking appropriate action during approach and landing procedures.