a) Three-Dimensional Imaging
Some systems and methods for accomplishing these goals are conventional in the field of so-called “lidar”, or “light detection and ranging”—analogous to the better-known “radar” that uses the radio portions of the electromagnetic spectrum. Because most lidar systems use pulsed lasers as excitation, the acronym “lidar” is sometimes said to instead represent “laser illumination detection and ranging”.
In a lidar system, a sharp pulse of light is projected toward an object, or field of objects, that is of interest. The object or objects reflect—for turbid media a more descriptive term is “scatter”—a portion of this excitation radiation back toward the system, where the return radiation is time resolved.
As in radar, round-trip propagation times for the radiation form a measure of the distances, or ranges, from the apparatus to the respective objects. Radar, however, simply due to the much longer wavelengths it employs, cannot provide the resolution available with lidar.
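The range computation itself is elementary: one-way range is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative Python only; the constant and function names are not from the original text):

```python
# Round-trip time of flight converted to one-way range, as in radar or lidar.
C = 299_792_458.0  # speed of light in vacuum, m/s


def range_from_round_trip(t_seconds: float) -> float:
    """One-way range in meters for a measured round-trip delay."""
    return C * t_seconds / 2.0


# A 1-microsecond round-trip delay corresponds to roughly 150 m of range.
print(range_from_round_trip(1e-6))
```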
High-resolution lidar imaging provides fully three-dimensional images that offer far higher resolution, on one hand, and distinct advantages in comparison with common two-dimensional imaging (e.g. photographs) on the other. As compared with such ordinary two-dimensional images, some of the advantages provided by the additional range information are the ability to remove clutter, to discriminate decoys accurately from objects of real interest, and to provide additional criteria for detection and classification.
High-resolution three-dimensional imaging may provide volumetric pixel sizes of approximately 0.75 mrad by 0.75 mrad by 7.5 cm. Such imaging requires high bandwidth (2 GHz) lidar receivers with small instantaneous fields of view (IFOV) and many pixels in the two-dimensional imaging directions.
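The 7.5 cm range dimension of the voxel follows directly from the 2 GHz receiver bandwidth, since range-bin size is approximately c/2B. A minimal check (illustrative Python; the names are assumptions, not from the original):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s


def range_resolution(bandwidth_hz: float) -> float:
    """Approximate range-bin size c/(2B) for a receiver of bandwidth B."""
    return C / (2.0 * bandwidth_hz)


# A 2 GHz receiver gives roughly 7.5 cm bins, matching the voxel depth above.
print(range_resolution(2e9))
```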
Key to these capabilities is effective and very fine time resolution of the return optical signals—ordinarily by a streak tube, although in modern practice very fast electronics can be substituted in relatively less-demanding applications. Such applications particularly include measurements at the scale of ocean volumes, in which temporal resolution may correspond to meters rather than centimeters.
Finer work, especially including laboratory-scale measurement or ultimately medical ranging with resolution finer than a millimeter, appears to exceed the current speed and resolution capabilities of electronics and accordingly calls for a streak tube. To use such a device for three-dimensional imaging, the laser pulses must be visible or shorter-wavelength light—so that the optical return pulse 21 (FIG. 1) from the object or objects is likewise visible or ultraviolet light 22. (While visible lidar excitation is hazardous because it damages the retina, shorter-wavelength excitation too is hazardous due to damage to the lens of the eye.) In either event, the optical return is made to take the form of a substantially one-dimensional image (i.e. slit-shaped, extending in and out of the plane of FIG. 1), or is reformatted 23 as such an image.
In response to that unidimensional optical input 22, in the form of visible or UV light, a photocathode screen 24 of the streak tube 18 forms a one-dimensional electronic image 25, which is refined by electron-imaging components 26 within the streak tube. (It will be understood that some very special streak-tube photocathodes have been developed to handle wavelengths other than visible; however, these are not commercially available materials, and the use of some such photocathode technologies introduces synchronization problems and other drawbacks.)
Depending on any image reformatting that may be performed upstream of the streak tube 18, position along these unidimensional optical and electronic images 22, 25 may either represent location along a simple thin image slice of the object field, or represent position in a very complex composite, positionally encoded version of a two-dimensional scene. This will be explained shortly.
Within the streak tube, a very rapidly varying electrical deflection voltage 28, applied across deflection electrodes 27, sweeps 29 the one-dimensional electronic image 25 quickly down a phosphor-coated surface 31, forming a two-dimensional visible image on the phosphor screen. The sweep direction 29 then represents time—and accordingly distance, to each backscattering object—while the orthogonal direction on the screen (again, extending in and out of the plane of FIG. 1) represents position along the input optical image, whether a simple image slice or an encoded scene.
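The sweep-direction calibration described above can be sketched as follows (an illustrative Python sketch under an assumed linear deflection sweep; the function name and parameters are hypothetical, not from the original):

```python
# Hypothetical linear-sweep calibration: vertical streak position on the
# phosphor screen maps to return time, and hence to range.
C = 299_792_458.0  # speed of light in vacuum, m/s


def range_from_sweep_position(y_mm: float, sweep_mm_per_ns: float,
                              t0_ns: float = 0.0) -> float:
    """Range (m) for a streak at position y_mm, given a linear sweep rate."""
    t_ns = t0_ns + y_mm / sweep_mm_per_ns
    return C * (t_ns * 1e-9) / 2.0
```

The orthogonal screen coordinate, not modeled here, simply carries over the cross-track position of the input line image.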
The patents mentioned above introduce considerable detail as to behavior and use of a streak tube. They also may represent the highest development of a form of lidar imaging familiarly known as “pushbroom”—because data are accumulated a little at a time, in thin strips transverse to a direction of motion.
Relative motion between the apparatus and the object field is provided, as for instance by operating the apparatus in an aircraft that makes regular advance over a volume of seawater, while laser-beam pulses are projected toward the water. The pulsed laser beam is formed into the shape of a thin fan—the thin dimension of the fan-shaped beam being oriented along the “track” (direction) of this relative motion.
In some laboratory-scale systems it is more convenient to instead scan an object or object field past a stationary lidar transceiver. In either instance, the broadly diverging wide dimension of the fan beam, often called the "cross-track" dimension, is at right angles to the direction of motion: this is the above-mentioned case of direct physical correspondence between the unidimensional optical or electronic image and a real slice of the object field. The Gleckler patent mentioned above, however, shows that two or more such one-dimensional images can be processed simultaneously—yielding a corresponding number of time-resolved pulse returns.
Each laser pulse thus generates at the receiver, after time-resolution of the return pulse, at least one two-dimensional snap-shot data set representing range (time) vs. azimuth (cross-track detail) for the instantaneous relative position of the system and object field. Successive pulses, projected and captured during the continuing relative motion, provide many further data frames to complete a third dimension of the volumetric image.
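The accumulation of per-pulse snapshots into a volumetric image can be sketched in a line or two (illustrative Python; the function name and array layout are assumptions):

```python
import numpy as np


def assemble_pushbroom_volume(frames):
    """Stack per-pulse (range x azimuth) frames along the along-track axis.

    Each laser pulse yields one 2-D range-vs-azimuth snapshot; successive
    pulses during the relative motion supply the third, along-track dimension.
    """
    return np.stack(frames, axis=0)  # shape: (pulses, range_bins, azimuth)
```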
The resulting three-dimensional image can be visualized simply by directly observing the streak-tube phosphor screen, or by capturing the screen display with a CCD or other camera at the frame rate (one frame per initiating laser pulse) for later viewing. Another option is to analyze the captured data, e.g. in a computer, by any of myriad application-appropriate algorithms.
Alternative to pushbroom imaging is so-called “flash” lidar, represented by patents Re. 33,865 and U.S. Pat. No. 5,412,372 of Knight and Alfano respectively. Here the excitation pulse is ideally formed into a substantially rectangular beam to illuminate the entire object or object field at once.
The resulting backscatter pulse, correspondingly, is all time resolved concurrently—typically requiring, at least for a streak tube, temporary mapping of the two-dimensional return into a one-dimensional (i.e. line) image that the tube can sweep. Such mapping, in the cited patents, is performed by a custom fiber-optic prism.
This sort of mapping may be done in a very great variety of ways. For example, successive raster-equivalent optical-image slices can be placed end-to-end along the input photocathode, or individual pixels can be subjected to a completely arbitrary reassignment to positions along the cathode. Any mapping intermediate between these extremes is also possible.
After time resolution, if desired, the data can be remapped to recover a multiplicity of original two-dimensional image-data frames—each now having its family of ranged variants. If preferred, the full three-dimensional data set can instead be unfolded in some other way for analysis.
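The folding and unfolding described above can be sketched in a few lines (an illustrative Python sketch; the random pixel assignment represents one extreme of the mappings described, and the function names are hypothetical, not from the patents cited):

```python
import numpy as np


def make_mapping(n_rows: int, n_cols: int, seed: int = 0) -> np.ndarray:
    """Assign each 2-D pixel an arbitrary position along the photocathode."""
    rng = np.random.default_rng(seed)
    return rng.permutation(n_rows * n_cols)


def to_line(image: np.ndarray, mapping: np.ndarray) -> np.ndarray:
    """Fold a 2-D scene into the 1-D line image the streak tube can sweep."""
    line = np.empty(image.size, dtype=image.dtype)
    line[mapping] = image.ravel()
    return line


def from_line(line: np.ndarray, mapping: np.ndarray, shape) -> np.ndarray:
    """Remap swept data back into the original 2-D frame layout."""
    return line[mapping].reshape(shape)
```

Because the mapping is a permutation, the round trip is exactly invertible; a raster (end-to-end slice) layout is simply the identity permutation.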
b) The Wavelength Limitation
Streak-tube imaging lidar is thus a proven technology, demonstrated in both pushbroom and flash configurations.1, 2 Unfortunately, however, it has heretofore been usable only in the visible-to-ultraviolet portion of the electromagnetic spectrum, whereas several important applications favor operation in longer-wavelength spectral regions.
A critical group of applications relates to so-called “eye safe” requirements for many operating environments. The human eye is extremely sensitive to visible radiation. Severe retinal damage can occur if someone is exposed to radiation transmitted by a conventional streak-tube lidar system.
In the near-infrared (NIR), by comparison, the human eye is far less sensitive and the risk correspondingly lower. The maximum permissible exposure for NIR radiation at a wavelength of 1.54 μm is typically three orders of magnitude greater than at 532 nm. The main reason is that the lens of the eye does not focus NIR radiation onto the retina.
Consequently, in applications where humans might be exposed to the transmitted light, it is desirable to operate the lidar at the longer wavelength. In addition, radiation at 1.54 μm is invisible to the human eye, yielding the advantage of inconspicuous operation—which is desirable in many applications.
Limitation to the visible/UV is in a sense artificial, arising as it does merely from the lack of a commercial streak tube with a photocathode sensitive to nonvisible radiation—even though NIR-sensitive photocathode materials exist.3 The vendor of those materials neither produces streak tubes nor will provide the photocathode materials to streak-tube vendors, and no streak-tube vendor is currently offering high-quantum-efficiency NIR streak tubes.
The near-infrared, however, is far from the only spectral region in which lidar operation would be very advantageous. The more-remote infrared portion of the electromagnetic spectrum (3 to 12 μm) overlaps strong absorption features of many molecules. As a result wavelengths in this region are particularly attractive for monitoring gaseous contaminant concentrations such as those encountered in atmospheric pollution or industrial process control.
CO2 lasers operating at 9 to 11 μm can produce high power and have been deployed in space for a number of applications. As will appear from a later section of this document, the present invention is well suited for use with CO2-laser-based imaging lidar systems.
Moreover, in other fields of optical measurement and analysis it is possible to make differential or ratio measurements—for example, differential absorption spectroscopy and other analogous plural- or multispectral investigations. Heretofore this has not been practical in the lidar field, even for measurements comparing and contrasting the visible and ultraviolet.
c) Other Technology not Heretofore Associated with Lidar
U.S. Pat. No. 6,349,016 of Larry Coldren is representative of advanced sophistication in a field previously related only to optical communications, optical switching and the like. To the best of the knowledge of the present inventors, that field has never previously been connected with lidar operations or any other form of three-dimensional imaging.
Tabulated below is other related work of Costello et al.3 and Francis et al.,7 as well as related commercial product literature.8, 9 These materials too are essentially representative of modern advances in optical switching and communications, unconnected with lidar.
d) Conclusion
As can now be seen, the related art fails to resolve the previously described problems of lidar unavailability for operation outside the visible wavelength region. The efforts outlined above, although praiseworthy, leave room for considerable refinement.