The inventive concepts disclosed herein pertain generally to the field of aircraft display units that present information to the crew of an aircraft.
Traditionally, a Synthetic Vision System (SVS) employed by an aircraft generates a three-dimensional synthetic image of a scene that is external to the aircraft (external scene). To generate the synthetic image, the SVS could acquire elevation data from a database. For example, the database could be a terrain database configured to store terrain data representative of terrain elevations contained in digital elevation models (DEM).
Generally, the terrain data of a DEM are stored as grids, and each grid represents an area of terrain and is commonly referred to as a terrain cell. Different grid sizes correspond to different data resolutions: relatively large grids correspond to relatively low resolution data, while smaller grids correspond to higher resolution data. In addition to terrain data, the database may also be implemented as, but is not limited to, a database configured to store data representative of surface features such as, but not limited to, obstacles, buildings, lakes and rivers, runways, and paved or unpaved surfaces other than runways. Once a database is deployed onboard an aircraft, the data could become outdated or stale due to natural forces such as wind, water, and ice and/or human activity such as the movement of people and vehicles and construction activity.
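The relationship between cell size and resolution described above can be sketched as follows. This is a minimal illustration, not any particular database format; the function name and the use of metric grid coordinates are assumptions for the example.

```python
# Hypothetical sketch of indexing a DEM grid: a position is mapped to the
# terrain cell containing it, and a larger cell size means lower resolution.
def cell_index(x_m: float, y_m: float, cell_size_m: float) -> tuple[int, int]:
    """Return the (column, row) of the terrain cell containing the point
    (x_m, y_m), measured from the grid origin."""
    return int(x_m // cell_size_m), int(y_m // cell_size_m)

# The same position falls in different cells at different resolutions:
print(cell_index(250.0, 1120.0, 1000.0))  # coarse 1 km cells -> (0, 1)
print(cell_index(250.0, 1120.0, 100.0))   # finer 100 m cells -> (2, 11)
```

At the coarse resolution, every point within the one-kilometer cell shares whatever single elevation value is stored for that cell.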
Where low resolution data are employed, the individual elevations of multiple structures located within one terrain cell are not stored. If these structures fall within one terrain cell containing a single elevation value from which a synthetic image is generated, then the discrete structures may not be individually rendered in the synthetic image. In such a case, the SVS may be unable to represent multiple elevations within one cell in a meaningful fashion, and the pilot would not be presented with an image representative of the external scene.
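The loss of individual structures can be seen in a small worked example. The down-sampling rule (keeping the maximum elevation of the cell) and the elevation values are assumptions made for illustration only; databases may store a different single value per cell.

```python
# Hypothetical sketch: collapsing a fine elevation grid into one coarse
# terrain cell. The single stored value hides the individual structures.
fine_cells_ft = [
    [120, 120, 350, 120],   # a 350 ft tower
    [120, 120, 120, 120],
    [120, 480, 120, 120],   # a 480 ft building
    [120, 120, 120, 120],
]

# One coarse cell stores one elevation (here, the maximum of the area):
coarse_elevation_ft = max(max(row) for row in fine_cells_ft)
print(coarse_elevation_ft)  # 480; both structures collapse into one value
```

A synthetic image generated from the coarse cell would render a single uniform elevation, so neither the tower nor the building would appear as a discrete structure.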
In addition to the SVS, an aircraft could employ an Enhanced Vision System (EVS) that employs one or more image capturing devices and/or a separate processor to generate enhanced image data representative of a real-world, three-dimensional image of the external scene. Radar systems and Light Detection and Ranging (LIDAR) systems are examples of systems that employ such image capturing devices.
Generally, the radar and LIDAR systems may control the direction of an electromagnetic or photonic beam, respectively, by steering the beam horizontally and vertically as it is being transmitted during a sweep of a three-dimensional zone. When the beam strikes or reflects off an object, part of the energy is reflected back and received by the active sensors. The range of the object may be determined by measuring the elapsed time between the transmission and reception of the beam. The azimuth of the object may be determined as the angle to which the beam was steered in the horizontal direction relative to the longitudinal axis of the aircraft during the transmission/reception of the beam. The elevation or elevation angle of the object may be determined as the angle to which the beam was steered in the vertical direction relative to the longitudinal axis of the aircraft during the transmission/reception of the beam. During each sweep of the three-dimensional zone, the range, azimuth, and elevation of each point located within the zone may be collected and registered to produce point cloud data from which enhanced image data comprised of point cloud image data may be produced.
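The geometry described above can be sketched in code: the range follows from the round-trip time of the beam, and each return's range, azimuth, and elevation angle can be converted to a point in a body-referenced Cartesian frame. This is a minimal sketch under the text's assumptions; the function names and the forward/right/up axis convention are illustrative, not taken from any particular system.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_elapsed(elapsed_s: float) -> float:
    """One-way range from round-trip time: the beam travels out and back,
    so the measured elapsed time covers twice the range."""
    return C * elapsed_s / 2.0

def to_cartesian(rng_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert (range, azimuth, elevation) to body-frame coordinates:
    x forward along the longitudinal axis, y to the right, z up."""
    x = rng_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = rng_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = rng_m * math.sin(elevation_rad)
    return x, y, z

# One return of a sweep: ~6.67 microseconds round trip is roughly 1 km,
# with the beam steered 10 degrees right and 2 degrees below the axis.
rng = range_from_elapsed(6.67e-6)
point = to_cartesian(rng, math.radians(10.0), math.radians(-2.0))
```

Registering many such points over a sweep of the three-dimensional zone yields the point cloud data from which the enhanced image data may be produced.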