The need for early warning of an invading force dates back to the ancient world, when it was met by scouts traveling on foot or on horseback, and eventually by telescopes and binoculars that provided an enhanced view of the enemy. In the modern era, beginning in World War II, radio detection and ranging (RADAR) systems were deployed, which utilize reflected radio waves to identify the position of enemy aircraft. Sonar similarly uses sound waves to locate vessels in the oceans. Soon after the development of laser technologies, light/laser detection and ranging (LIDAR/LADAR) systems underwent development.
LADAR is generally based on emitting short pulses of light within a certain field of view (FOV) at precisely controlled moments, collecting the reflected light, and determining its time of arrival (ToA), possibly separately for different directions. Subtraction of the pulse emission time from the ToA yields the time of flight (ToF), which, in turn, allows one to determine the distance to the target from which the light was reflected. LADAR is the most promising vision technology for autonomous vehicles of various kinds, as well as for surveillance, security, industrial automation, and many other areas where detailed information about the immediate surroundings is required. While lacking the range of radar, LADAR has much higher resolution due to the much shorter wavelengths used for sensing and, hence, comparatively relaxed diffraction limitations. It may be especially useful for moving vehicles, whether manned, self-driving, or unmanned, if it can provide detailed 3D information in real time, with the potential to revolutionize vehicles' sensing abilities and enable a variety of missions and applications.
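The ToF-to-distance relationship described above can be illustrated with a minimal sketch; the function name and parameters are illustrative assumptions, not part of any referenced patent:

```python
# Minimal sketch of the LADAR time-of-flight range computation
# described above. Names and parameters are illustrative only.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second, in vacuum


def range_from_tof(emission_time_s: float, arrival_time_s: float) -> float:
    """Return the one-way distance to the target, in meters.

    ToF = ToA - emission time; the pulse travels to the target and
    back, so the one-way distance is half the round-trip path length.
    """
    tof = arrival_time_s - emission_time_s
    if tof < 0:
        raise ValueError("arrival time must not precede emission time")
    return SPEED_OF_LIGHT * tof / 2.0
```

For example, a pulse whose echo arrives 1 microsecond after emission corresponds to a target roughly 150 m away, which illustrates the sub-microsecond timing precision such systems require.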
However, until recently, LADARs have been prohibitively large and expensive for vehicular use. They have also lacked the desired performance: to become a true real-time vision technology, a LADAR must provide high-resolution imagery at frame rates comparable to those of video cameras, in the range of 15-60 fps, and cover a substantial solid angle. Ideally, a LADAR with omnidirectional coverage of 360° in azimuth and 180° in elevation would be very beneficial. Collectively, these requirements may be called “real-time 3D vision”.
A variety of approaches have been suggested and tried, including mechanical scanning, non-mechanical scanning, and imaging time-of-flight (ToF) focal-plane arrays (FPAs). A variety of laser types, detectors, signal-processing techniques, etc., have also been used to date, as shown by the following.
U.S. Patent Application Pub. No. 2012/0170029 by Azzazy teaches a 2D focal-plane array (FPA) in the form of a micro-channel plate, illuminated in its entirety by a short, high-power pulse of light. This arrangement is generally known as flash LADAR.
U.S. Patent Application Pub. No. 2012/0261516 by Gilliland teaches another embodiment of flash LADAR, with a two-dimensional array of avalanche photodiodes, likewise illuminated in its entirety.
U.S. Patent Application Pub. No. 2007/0035624 by Lubard teaches a similar arrangement with a 1D array of detectors, still illuminated together, while U.S. Pat. No. 6,882,409 to Evans further adds sensitivity to different wavelengths to flash LADAR. Another approach is the use of a 2D scanner and a single detector receiving reflected light sequentially from every point in the FOV, as taught by U.S. Patent Application Pub. No. 2012/0236379 by da Silva.
Additional improvements to this approach are offered by U.S. Pat. No. 9,383,753 to Templeton, which teaches a synchronous scan of the FOV of a single receiver via an array of synchronized MEMS mirrors. This arrangement is known as retro-reflective.
Yet another approach is to combine multiple lasers and multiple detectors in a single scanned FOV, as taught by U.S. Pat. No. 8,767,190 to Hall.
However, at present, such a real-time 3D LADAR remains elusive.