The improvement of images using advanced image-processing techniques, for example, those described by Puetter et al. in “Digital Image Reconstruction: Deblurring and Denoising”, Ann. Rev. Astron. & Astrophys., 43, pp. 139-194 (2005), has advanced considerably over the last several decades, and such techniques are routinely applied to high-value data, such as satellite observations (Metcalf et al., 1996; Lawrence et al., 2007; Wilson et al., 2015) and astronomical surveys (Hiltner et al., 2003). There are many commercial and military applications, however, that have not yet employed advanced image-processing methods to improve their imagery. One such application is imaging of the Earth's surface from space.
Remote sensing and imaging are broad-based technologies with diverse and important practical applications, such as geological mapping and analysis, and meteorological forecasting. Aerial and satellite-based photography and imaging are especially useful remote imaging techniques that have, over recent years, become heavily reliant on the collection and processing of data for digital images, including spectral, spatial, elevation, and vehicle or platform location and orientation parameters. Spatial data characterizing real estate improvements and locations, roads and highways, environmental hazards and conditions, utilities infrastructures (e.g., phone lines, pipelines), and geophysical features can now be collected, processed, and communicated in a digital format to conveniently provide highly accurate mapping and surveillance data for various applications (e.g., dynamic GPS mapping).
Major challenges facing remote sensing and imaging applications are spatial resolution and spectral fidelity. Photographic issues, such as spherical aberration, astigmatism, field curvature, distortion, and chromatic aberration, are well-known problems that must be dealt with in sensor/imaging applications. Certain applications require very high image resolution, often with tolerances of inches. Depending upon the particular system used (e.g., automobile, aircraft, satellite, space vehicle or platform), an actual digital imaging device may be located anywhere from several feet to miles from its target, resulting in a very large scale factor (plate scale). Providing images with very large scale factors, which also have resolution tolerances of inches, poses a challenge to even the most robust imaging system. Thus, conventional systems usually must make a trade-off between resolution quality and the size of a target area that can be imaged. If the system is designed to provide high-resolution digital images, i.e., “smaller” pixels (less solid angle per pixel), the field of view (FOV) of the imaging device is typically small. On the other hand, if the system provides a larger FOV, the system is going to employ “larger” pixels (greater solid angle per pixel) for maximum coverage, with a resulting decrease in spatial resolution.
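The resolution/coverage trade-off above can be illustrated with a small-angle calculation. The sketch below uses hypothetical numbers (a 500 km orbit and a 4096-pixel detector, neither taken from the source) to show how the ground sample distance and swath width both scale with the angular size of a pixel, so shrinking pixels for resolution directly shrinks the imaged area:

```python
import math

def ground_sample_distance(altitude_m: float, pixel_angle_rad: float) -> float:
    """Ground footprint (m) of one pixel for a nadir-viewing sensor (small-angle)."""
    return altitude_m * pixel_angle_rad

def swath_width(altitude_m: float, pixel_angle_rad: float, n_pixels: int) -> float:
    """Approximate ground coverage (m) of a linear array of n_pixels."""
    return altitude_m * pixel_angle_rad * n_pixels

# Illustrative, hypothetical parameters: 500 km altitude, 4096-pixel detector.
altitude = 500e3
n_pixels = 4096

for pixel_angle in (1e-6, 5e-6):  # radians subtended by one pixel
    gsd = ground_sample_distance(altitude, pixel_angle)
    swath = swath_width(altitude, pixel_angle, n_pixels)
    print(f"pixel angle {pixel_angle:.0e} rad -> GSD {gsd:.2f} m, swath {swath / 1e3:.2f} km")
```

With a fixed detector, the finer pixel (1 microradian, 0.5 m GSD) covers a 2 km swath, while the coarser pixel covers 10 km at 2.5 m GSD, making the trade-off explicit.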
While imaging from high altitude and/or space is not new, and significant efforts and resources have been expended by the military for intelligence imagery, over the past few decades a new trend is the widespread use of Earth imagery collected by a number of commercial ventures. One of the newest trends is the use of very small satellites, e.g., micro- and nano-satellites such as “CubeSats”, to obtain even more widespread and low-cost imagery. Such CubeSats are typically deployed in a “constellation” of anywhere from ten to hundreds of small satellites. Currently, a number of commercial projects are underway to provide satellite-imaging-as-a-service to industry, governments, and the general public for applications including disaster relief, agriculture, and environmental monitoring, to name a few. As with the larger military intelligence satellites, these images of Earth from space are generally limited in their spatial resolution by diffraction. Further, since the cameras in such satellites are designed to meet diffraction limits, the signal-to-noise per pixel is largely the same from application to application: the solid angle subtended by a pixel times the telescope aperture area, i.e., the system étendue, is nearly the same in all applications. What does change with telescope size from application to application, even if system étendue is the same, is the required pointing accuracy and stability. Larger diffraction-limited telescopes have pixels that subtend a much smaller angle. Consequently, pointing accuracy and stability are much more critical, especially if the detectors are scanners and not large-field, two-dimensional detectors. This is because registration of the imagery requires reliable pointing (see, for example, the discussion in Puetter and Hier 2008). These requirements are loosened with smaller telescopes since the solid angles subtended by a pixel are larger.
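The étendue argument above can be checked numerically. Assuming a diffraction-limited design in which each pixel subtends the Rayleigh angle 1.22 λ/D, the pixel solid angle scales as 1/D² while the aperture area scales as D², so their product is independent of aperture size. The apertures below (5 cm and 50 cm) are illustrative values, not taken from the source:

```python
import math

def diffraction_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: angular resolution of a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

def pixel_etendue(pixel_angle_rad: float, aperture_m: float) -> float:
    """Per-pixel étendue: pixel solid angle times aperture area (m^2 sr)."""
    solid_angle = pixel_angle_rad ** 2          # small-angle, square pixel
    area = math.pi * (aperture_m / 2) ** 2
    return solid_angle * area

wavelength = 550e-9  # visible light
for aperture in (0.05, 0.5):  # e.g., a small CubeSat optic vs. a larger telescope
    theta = diffraction_limit_rad(wavelength, aperture)
    G = pixel_etendue(theta, aperture)
    print(f"D = {aperture} m: pixel angle {theta:.3e} rad, etendue {G:.3e} m^2 sr")
```

Both apertures yield the same per-pixel étendue, consistent with the observation that signal-to-noise per pixel is largely fixed across diffraction-limited designs, while the 10x larger telescope must point roughly 10x more accurately to register its finer pixels.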
However, because smaller telescopes are generally less expensive, they usually carry less sophisticated pointing systems. Fortunately, large-field, two-dimensional arrays, which possess native geometric stability in their imagery, can substantially compensate for this limitation. In addition, multiple images can be aligned after they are collected to construct mosaics and higher signal-to-noise ratio (SNR) stacks of images.
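The SNR benefit of stacking aligned images follows from averaging independent noise: combining N frames reduces random noise by roughly a factor of the square root of N. A minimal simulation of this effect, assuming already-registered frames with Gaussian noise (the specific signal and noise levels are arbitrary):

```python
import random
import statistics

random.seed(0)
signal = 100.0       # true pixel value (arbitrary units)
noise_sigma = 10.0   # per-frame random noise
n_frames = 64        # frames in the stack
n_pixels = 1000      # pixels simulated

# One noisy frame vs. the pixel-wise average ("stack") of n_frames frames.
single_frame = [signal + random.gauss(0, noise_sigma) for _ in range(n_pixels)]
stacked = [
    statistics.fmean(signal + random.gauss(0, noise_sigma) for _ in range(n_frames))
    for _ in range(n_pixels)
]

single_rms = statistics.pstdev(single_frame)
stacked_rms = statistics.pstdev(stacked)
print(f"single-frame noise ~{single_rms:.2f}, stacked noise ~{stacked_rms:.2f}, "
      f"expected ~{noise_sigma / n_frames ** 0.5:.2f}")
```

With 64 frames the residual noise falls from about 10 to about 1.25, an 8x SNR gain, which is why post-collection alignment and stacking can partially offset the limitations of inexpensive hardware.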