1. Field of the Invention
The present invention relates generally to range detection and collision avoidance. More specifically, the present invention relates to apparatus, optical systems, program product, and methods for passively sensing and avoiding aerial targets.
2. Description of the Related Art
The American Society for Testing and Materials (ASTM) International established Committee F-38 on unmanned aircraft systems (UASs) or unmanned aerial vehicles (UAVs) to identify design and performance standards for airborne sense-and-avoid systems. The committee has recently issued standards that require a UAS/UAV to be able to detect and avoid another airborne object within a field of regard of ±15 degrees Elevation and ±110 degrees Azimuth and to be able to respond so that collision is avoided by at least 500 ft. The 500 ft safety bubble is derived from the commonly accepted definition of what constitutes a near mid-air collision. The inventors have recognized that the standard will likely be incorporated by reference in eventual Federal Aviation Administration (FAA) certification requirements. The inventors have also recognized that in order to meet such standard in both compliant and noncompliant environments, detection of a target, for example, having an optical cross-section of a Cessna 172 aircraft (approximately 22.5 square meters) at a range of at least approximately 10 kilometers with a probability of detection of 90% or better, for a single “look,” at night and in bad weather, would be desirable, if not required.
Typical current attempts at passive ranging with electro-optical (EO) or infrared (IR) sensors onboard a UAS have involved performing maneuvers by the UAS in order to speed convergence of tracking algorithms that utilize angle-angle-only data. Such maneuvers, however, disrupt the operational mission of the UAS and can unwittingly highlight the location of the UAS in a hostile environment. Besides long convergence times, such systems also suffer from high false alarm rates. Due to such poor performance, active sensors, such as radar, are being added to help overcome these problems, which results in a loss of any existing “low observability” capability while using the sense and avoid system. Recognized by the inventors is that it would be very beneficial if other methods could be used to determine the range of an aerial target/potential obstacle using passive EO and/or IR sensors without requiring the UAS to maneuver to help estimate range to target. There are several ways this might be accomplished in a non-cooperative environment. For example, one method employed by ground vehicles is the use of stereo optics. This method typically suffers from high cost due to double optics, sensor alignment and vibration problems, along with high computational costs, large baseline separation requirements, and dual tracking problems. The large field of regard needed to meet the sense and avoid design space can also present a major problem, as multiple sets of data may be required. This could also require multiple stereo optics or a steerable stereo optics sensor system.
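The large baseline separation requirement noted above for stereo optics follows from the classic triangulation relation, range = baseline × focal length ÷ disparity. The sketch below is illustrative only; the 1 m baseline and 10,000-pixel effective focal length are assumed values, not parameters of the present disclosure:

```python
def stereo_range(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Classic stereo triangulation: range = B * f / d.

    baseline_m   -- separation between the two cameras (meters)
    focal_px     -- effective focal length expressed in pixels
    disparity_px -- measured pixel shift of the target between the two images
    """
    return baseline_m * focal_px / disparity_px

# At 10 km, even a 1 m baseline with a 10,000-pixel focal length yields
# only a single pixel of disparity, so small measurement errors produce
# large range errors -- the large-baseline problem described above.
print(stereo_range(1.0, 10000.0, 1.0))
```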
Several methods exist which do not require stereo optics. These include single optics systems employing the Depth from Focus (DFF) method, the split-prism focusing method, and the Depth from Defocus (DFD) method. The DFF method is relatively simple in that the range of the object is determined by focusing the object on an image detector in a camera system and, using the camera settings and known lens characteristics, solving an equation to determine the distance from a reference point within the camera. DFF, however, has several disadvantages. For example, multiple images (at least 20 to 30, or more) must generally be taken, each using at least one different camera parameter, e.g., one for each range, and the camera setting providing the sharpest image must be identified. Accordingly, such methodology can be relatively slow, both in acquiring the imaged data and in resolving the data. Such method can also require a great deal of system resources. Further, as the distance between the imaged point and the surface of exact focus increases or decreases, the imaged objects become progressively more defocused. Similarly, the split-prism focusing method requires a new operation for every potential target.
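In the simplest case, the equation solved by the DFF method is the thin-lens equation, 1/f = 1/u + 1/v. The sketch below, with assumed focal length and detector-distance values chosen purely for illustration, shows how the object range u follows directly once the detector position v of best focus is found:

```python
def dff_range(focal_length_m: float, image_distance_m: float) -> float:
    """Estimate object distance from the thin-lens equation.

    1/f = 1/u + 1/v  =>  u = f * v / (v - f)
    where f is the focal length, v is the lens-to-detector distance at
    best focus, and u is the lens-to-object distance (range), in meters.
    """
    return focal_length_m * image_distance_m / (image_distance_m - focal_length_m)

# Example (assumed values): a 150 mm lens in best focus with the detector
# about 2.25 microns beyond the focal plane places the object near 10 km.
print(dff_range(0.150, 0.15000225))
```

Note how small the detector displacement is at long range; this sensitivity is one reason many images at different camera settings are needed to find the sharpest one.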
The DFD method has advantages over the DFF method (and the split-prism method). For example, depending upon the environmental conditions, DFD may require processing as few as about two to three images, as compared to a large number of images in the DFF method. As such, the inventors have recognized that a complete range map can be made from as few as two or three images using DFD, while even under the most idealistic conditions, the DFF and split-prism methods would require at least one image for every target, which would in turn require mechanically adjusting the focus to optimal for each one in turn. The DFD method, however, does require an accurate camera calibration for the camera characteristics (e.g., point spread function as a function of different camera parameters), which the DFF and split-prism methods do not. Nevertheless, as an aerial environment can produce multiple simultaneous potential obstacles within a field of view, which would precipitate a requirement to know the range of each of the potential obstacles, the inventors have recognized that, due to the requirement for relatively few images (down to as few as a single defocused image for a complete range map), the advantages of DFD outweigh the disadvantages.
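As a minimal sketch of the DFD principle, the idealized geometric blur-circle model below relates measured defocus blur to range for a camera with known parameters. The lens values are assumptions chosen for illustration; a practical system would rely on the calibrated point spread function discussed above rather than this idealized geometry:

```python
def blur_diameter(u: float, f: float, v: float, aperture: float) -> float:
    """Geometric blur-circle diameter on the detector for a point at range u,
    given focal length f, lens-to-detector distance v, and aperture diameter
    (all in meters): b = A * v * |1/f - 1/v - 1/u|."""
    return aperture * v * abs(1.0/f - 1.0/v - 1.0/u)

def range_from_blur(b: float, f: float, v: float, aperture: float) -> float:
    """Invert the blur model for an object beyond the plane of best focus:
    b = A*v*(1/f - 1/v - 1/u)  =>  u = 1 / (1/f - 1/v - b/(A*v))."""
    return 1.0 / (1.0/f - 1.0/v - b/(aperture*v))

# Assumed example: 150 mm lens, 50 mm aperture, detector set for best
# focus at 5 km; an object at 10 km produces a sub-micron geometric blur
# that the inversion recovers as range.
f, A = 0.150, 0.050
v = 1.0 / (1.0/f - 1.0/5000.0)      # detector position for 5 km focus
b = blur_diameter(10000.0, f, v, A)
print(range_from_blur(b, f, v, A))
```

A single defocused image, compared against a focused (or differently defocused) one, thus yields range at every pixel, which is why a complete range map requires so few images.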
Testing, via computer modeling, was performed on various types of sensors, including a long wave wide-angle uncooled IR focal plane array sensor system represented by a 1024×768 Uncooled IR Focal Plane Array sensor including 65 micron pixel pitch detectors and by a 1024×768 Uncooled IR Focal Plane Array sensor including 25 micron pixel pitch detectors with short focal lengths. Each failed to perform even minimally to the desired standards. By extending the focal length of the array having 25 micron pixel pitch detectors (narrowing the field of view), marginal performance was achieved; however, in order to cover a field of regard of 30 degrees vertical by 44 degrees horizontal, 10×11=110 snapshots at the narrower field of view were required, along with a mechanism to steer where the narrow field of view optics are looking. Testing, via computer modeling, was also performed using a wide-angle small lowlight television (LLTV) or night vision sensor system represented by a monochrome CCD camera having a detector resolution of 1400×1024 pixels, which performed poorly, and using narrow field of view (150 mm focal length) optics for the LLTV sensor, which performed somewhat adequately only in good weather and good illumination conditions. It was determined that a wide angle sensor system using five LWIR sensors to cover the required field of regard with depth of defocus sensing and processing capabilities would potentially be adequate for a high performance aircraft that can pull 5 Gs or more. Recognized by the inventors, however, is that such a configuration would, in many instances, be inadequate for a less maneuverable aircraft such as a UAS that can only pull less than 2 Gs. Further recognized by the inventors is that existing published algorithms fail to provide for determining range at long ranges (e.g., 10 kilometers) due to atmospheric effects, and thus, would not be capable of performing to the prescribed standards unless possibly the air is perfectly still and atmospheric loss of signal is taken into account.
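The 110-snapshot figure follows from simple tiling arithmetic over the field of regard. The narrow field of view tile size used below (3 degrees by 4 degrees) is an assumption chosen because it reproduces the stated 10×11 count; the disclosure does not specify the tile size here:

```python
import math

# Field of regard to be covered (degrees), as stated above.
FOR_V, FOR_H = 30.0, 44.0

# Assumed narrow field of view per snapshot (degrees) -- an illustrative
# value that reproduces the 110-snapshot count, not a disclosed parameter.
NFOV_V, NFOV_H = 3.0, 4.0

tiles = math.ceil(FOR_V / NFOV_V) * math.ceil(FOR_H / NFOV_H)
print(tiles)  # 10 * 11 = 110 snapshots
```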
The inventors have further recognized that it would not be necessary to cover the entire field of regard at a narrow field of view if data collected during wide field of view operations were properly utilized, and that an enhanced optical system would detect most of the small targets using the wide field of view by allowing a high false alarm rate and multiple looks. Accordingly, a common optics system providing the advantages of both a wide angle field of view and a narrow angle field of view would achieve reliable warning in time to avoid non-compliant high speed aircraft on a collision course with the UAS. Once a set of possible targets is collected at wide angle, a narrow field of view operation can be employed to confirm true targets with good range data and eliminate most of the false alarms in the process. This could potentially take far fewer than 110 snapshots at the narrow field of view for the target confirmation and false alarm elimination steps. Also recognized is that operations between the wide field of view and the narrow field of view could be interleaved so that only a fraction of a second occurs between wide field of view snapshots, and track filtering could also be used to help eliminate some of the false alarms. The optical system could include a beam splitter or other light divider, but at the cost of some light. Alternatively, a mirror located after the primary lens of the sensor system which deflects the focused rays of light through a different optical path could be employed to allow for interleaved operation with little loss of light. Further, a piezo-electric device or other mechanism could be used to move a detector array or other image sensor back and forth relatively quickly along the optical axis to produce focused and defocused images, or defocused and more defocused images.
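One possible interleaving scheme, sketched below with purely illustrative timing values (none of the exposure or gap budgets are disclosed parameters), alternates narrow field of view confirmation looks against the candidate list with periodic wide field of view snapshots, so that the wide-view revisit gap stays under a fraction of a second:

```python
# Illustrative timing assumptions (seconds); not measured or disclosed values.
NARROW_EXPOSURE_S = 0.05    # one narrow-FOV confirmation look
MAX_WIDE_GAP_S = 0.25       # keep the wide-FOV revisit gap under this budget

def interleave(candidates):
    """Build a capture schedule that confirms each wide-FOV candidate at
    narrow FOV while never letting the gap between wide-FOV snapshots
    exceed the budget."""
    schedule = ["wide"]
    gap = 0.0
    for target in candidates:
        if gap + NARROW_EXPOSURE_S > MAX_WIDE_GAP_S:
            schedule.append("wide")   # revisit wide FOV before the gap grows
            gap = 0.0
        schedule.append(f"narrow:{target}")
        gap += NARROW_EXPOSURE_S
    schedule.append("wide")           # close with a wide-FOV snapshot
    return schedule

# Six wide-FOV candidates to confirm at narrow FOV:
print(interleave([1, 2, 3, 4, 5, 6]))
```

In this sketch, track filtering would then operate on the sequence of wide field of view snapshots, whose spacing the schedule bounds.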
Correspondingly, the inventors have recognized the need for apparatus, optical systems, program product, and methods for providing passive sensing and facilitating avoidance of airborne obstacles, which can provide image acquisition using both narrow and wide fields of view at substantially the same time, which can allow the UAS to detect and avoid another airborne object having a 22.5 square meter optical cross-section within a field of regard of ±15 degrees Elevation and ±110 degrees Azimuth at a range of at least approximately 10 kilometers with a probability of detection of 90% or better at night and in bad weather, and which can provide data to cause the UAS to respond so that a collision is avoided by at least 500 ft.