1. Field of the Invention
Embodiments of the present invention relate generally to imaging and, more particularly, relate to methods and apparatus for superresolution imaging of a target.
2. Description of Related Art
The achievable resolution of a conventional telescope system is fundamentally limited by optical diffraction such that the size of the smallest resolvable features is approximately r_diff = (λ Z)/D, wherein λ is the optical wavelength, Z is the distance to the object, and D is the collection aperture width. For non-circular telescope apertures, it is noted that resolution will be direction dependent, with the resolution associated with a particular transverse direction being inversely proportional to the width of the aperture in that direction (where the optical system has been "unfolded" such that the optical beam has no twists or turns). For example, for a rectangular aperture of width Dx and height Dy, the diffraction limited resolution of the image in the horizontal and vertical directions will be, respectively, (λ Z)/Dx and (λ Z)/Dy.
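By way of a short numerical sketch of the relation above (the wavelength, range, and aperture values below are hypothetical, chosen only for illustration):

```python
def diffraction_limited_resolution(wavelength_m, distance_m, aperture_m):
    """Smallest resolvable feature size: r_diff = (lambda * Z) / D."""
    return wavelength_m * distance_m / aperture_m

# Hypothetical values: 1 um wavelength, 100 km range, 0.5 m aperture.
r_diff = diffraction_limited_resolution(1e-6, 100e3, 0.5)   # 0.2 m features
```

Halving the range or doubling the aperture width halves the size of the smallest resolvable feature.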
Although improved resolution is desired in some applications, better resolution generally requires larger apertures. However, the design time and difficulty, fabrication time and difficulty, weight, and cost of a telescope system increase rapidly and nonlinearly with size. As such, superresolution techniques that improve resolution while circumventing the diffraction limit are of interest, both to reduce the high cost of high resolution telescope systems and to reduce the size and weight needed to achieve a desired performance. Such superresolution techniques would ideally be able to resolve and discriminate features many times finer than the diffraction limit associated with the width of the physical receiving aperture.
One superresolution technique is based on mathematical techniques to implicitly or explicitly estimate spatial frequency content beyond the hard cutoff frequency corresponding to the highest frequency of the passband associated with the diffraction limit. This estimation is enabled through use of various physical constraints on the intensity properties of objects, such as finite extent and positivity, and/or by statistical knowledge of the object's spatial intensity distribution function (e.g., that it is composed of homogeneous segments with sharp contrast transitions). By restoring lost frequencies beyond the passband, fine-scale details are restored. Although these techniques have been pursued for some time, they have not been shown to extend resolution beyond the diffraction limit by more than ˜5-20%.
Other superresolution techniques include superresolution by aperture coding and superresolution by image sharpening, which are accomplished by two distinct methods but have the same effect. In particular, aperture coding modulates the transmission amplitude and/or phase of the coherent transfer function of the telescope system, while image sharpening uses linear or non-linear post-processing methods to digitally amplify high frequency Fourier components of the image. However, both techniques act to adjust the shape of the blur function by making the center lobe sharper at the price of placing more energy in the higher order diffractive lobes.
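The digital amplification of high frequency Fourier components described above can be sketched as an inverse filtering operation. The Gaussian-shaped transfer function and all numerical values below are illustrative assumptions rather than any specific system's optics, and a practical sharpening scheme would regularize the inversion to avoid amplifying noise:

```python
import numpy as np

n = 256
x = np.arange(n)
signal = np.cos(2*np.pi*40*x/n)   # in-band detail, attenuated by the blur

# Gaussian-shaped transfer function: attenuates high frequencies within the
# passband (an illustrative assumption, not a specific system's OTF).
k = np.fft.fftfreq(n, d=1/n)
otf = np.exp(-(k/60.0)**2)

blurred = np.fft.ifft(np.fft.fft(signal)*otf).real
# Image sharpening: digitally amplify the high-frequency Fourier components
# by the inverse of the transfer function (regularized in practice).
sharpened = np.fft.ifft(np.fft.fft(blurred)/otf).real
```

Note that such sharpening only reshapes the blur within the passband; it cannot recover frequencies beyond the diffraction cutoff, where the transfer function is zero.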
Optical synthetic aperture radar (OSAR) is the optical analog to longer wavelength electromagnetic radiation (e.g., radio and microwave radiation) synthetic aperture radar (SAR) systems. OSAR achieves spatial resolution through temporally modulated active illumination and accurate temporally resolved measurement of the amplitude and phase of reflected radiation pulses, while the relative motion between the imaging system and target causes the imaging system to sweep out an arc in relation to the target, referred to as the synthetic aperture. The synthetic aperture angle may correspond to the angular rotation of a spinning object during the image collection time, or the angle subtended by relative translational motion of the imaging system with respect to the object, or a combination thereof. The imaging system operates by transmitting sequences of laser pulses at the object, collecting the reflected returns with a receiver aperture, optically mixing (e.g., via heterodyning) the received light with a local optical beam that is coherent with the transmitted beam, focusing all of the mixed light onto a heterodyne detector which measures the temporally varying amplitude and phase of the total received signal, and computationally processing the signal (analog, digital, or hybrid) to form the image. The x-y dimensions of an OSAR image correspond to the angle and range of the target relative to the imaging system, rather than the angle-angle coordinates produced by conventional passive systems. The range resolution of an OSAR system is related to the spectral bandwidth of the pulses, which are of either short duration or are frequency modulated (i.e., chirped). The angle resolution of the OSAR image is limited by the diffraction limit of the synthetic aperture, e.g., by the angular extent of the arc swept out by the relative motion.
Thus, the resolution is disassociated from the physical width of the collection aperture and, as a result, circumvents the normal diffraction limit associated with the physical aperture of the receiver. However, challenges of OSAR include the requirement for long coherence length sources, and for accurate control and/or knowledge of the phase relations between the pulses.
Also, a single OSAR image has a signal to noise ratio (SNR) of 1 as a result of laser speckle noise. Many images must be produced and averaged to produce a higher SNR image. OSAR is also extremely sensitive to vibrations or any movement at optical scales within the field of view (FOV) (e.g., motion of foliage from wind, vibrating surfaces, a moving car or person, etc.). Because the x and y axes of an OSAR image are angle and range, image appearance differs significantly from that of conventional imaging systems, for which the image x and y axes correspond to horizontal and vertical angles, as in direct human vision and conventional telescopes. Furthermore, perception of the image is complicated since the image is tomographic in nature, such that multiple points on the object (i.e., points lying along a line in the projection direction) are mapped to a single point in the image, e.g., the signal in an image pixel can correspond to points on both the front and back surfaces of an object. As a result of these complexities, interpretation of SAR-type imagery often requires special training.
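The speckle-averaging behavior noted above can be illustrated with a short simulation (illustrative only, not part of any disclosed system): fully developed speckle intensity is exponentially distributed, so a single frame has an SNR of 1, and averaging N independent frames improves the SNR by roughly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_frames = 100_000, 25

# Fully developed speckle: intensity is exponentially distributed, so the
# standard deviation equals the mean and a single frame has SNR of 1.
frames = rng.exponential(scale=1.0, size=(n_frames, n_pixels))

single_snr = frames[0].mean() / frames[0].std()   # ~1
avg = frames.mean(axis=0)                          # average 25 frames
avg_snr = avg.mean() / avg.std()                   # ~sqrt(25) = 5
```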
Fourier telescopy uses similar principles to OSAR. Fourier telescopy uses an array of transmitters and a single non-imaging (e.g., bucket) collector that measures only the temporal intensity variations of the reflected signal, but not the phase. The basic principle of Fourier telescopy is to illuminate the object with pairs of transmitters powered by a common laser source, so the light is highly spatially coherent between the transmitters such that the beams interfere strongly at the target to produce a sinusoidal fringe pattern. Adjusting the phase of one of the transmitters with respect to the other causes the fringe pattern to slide across the target. Measurement of the return provides direct measurement of the in-phase and quadrature parts of a Fourier component of the object's spatial reflectivity distribution. Repeating for various pairings of transmitters produces fringe patterns of different orientation and spatial period, allowing the Fourier transform of the object's reflectivity distribution to be sampled at many points within some footprint in the Fourier domain. In the simplest configuration, the transmitters are equally spaced to form an "L" pattern so that all possible pairings correspond to a regularly spaced rectilinear grid of samples of the Fourier transform of the object's reflectivity function. In this case, the spatial domain image is directly produced by an inverse Fourier transform. Additional complexities involve illumination with 3 transmitters simultaneously and amplitude modulation coding, which allows the relative phases between the Fourier components to be measured in a way that is insensitive to spatial modulation of the wavefront phase by atmospheric turbulence (referred to as the phase-closure approach). The diffraction limited resolution of the Fourier telescopy system is related to the effective aperture size corresponding to the width of the transmitter array in each direction.
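The in-phase and quadrature measurement described above can be sketched with a simple phase-stepping simulation. The random object, fringe frequencies, and four-step scheme below are illustrative assumptions, not the specific system described; the sketch shows only that a bucket detector under phase-shifted fringes recovers one sample of the object's Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
obj = rng.random((n, n))          # object spatial reflectivity distribution
y, x = np.mgrid[0:n, 0:n]
fx, fy = 5, 3                     # fringe spatial frequency (cycles/frame)

def bucket(phase):
    """Total reflected power under a phase-shifted sinusoidal fringe."""
    fringe = 1 + np.cos(2*np.pi*(fx*x + fy*y)/n + phase)
    return (obj*fringe).sum()

# Four phase steps recover the in-phase and quadrature parts of one Fourier
# component of the reflectivity distribution.
i_part = 0.5*(bucket(0) - bucket(np.pi))
q_part = 0.5*(bucket(-np.pi/2) - bucket(np.pi/2))
measured = i_part - 1j*q_part

true_component = np.fft.fft2(obj)[fy, fx]   # the same Fourier sample, directly
```

Repeating this for each transmitter pairing fills in the grid of Fourier samples from which the image is obtained by an inverse transform.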
The field-of-view (FOV) is inversely proportional to the smallest separation between transmitter elements, and thus the number of resolution elements of the produced image in the vertical and horizontal directions is proportional to the number of transmitter elements in the corresponding directions.
However, the vertical (or horizontal) angular FOV of the image is equal to 2 times the wavelength divided by the spacing of the transmitters in the vertical (or horizontal) direction. Since the angular resolution of the image is equal to the wavelength divided by the longest baseline of the transmitter array, an "L"-shaped array with 10 transmitters on each leg would only form an image of 20×20 resolution elements. A larger image can be built up from a patchwork of several small ones. However, the avoidance of aliasing problems requires that the laser illumination beam not illuminate an area larger than the FOV associated with the transmitter spacing. This limitation implies that the transmitter apertures should be of width equal to their spacing in order to adequately focus the beam such that the majority of the energy lies within the FOV. For ground-to-space imaging of satellites and space objects, this issue is avoided if the extent of the object is smaller than the FOV, since there is no background to the object to reflect light. Additionally, a single image has an SNR of 1 as a result of laser speckle noise. Many images must be produced and averaged to produce a higher SNR image.
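The 20×20 figure above follows directly from the stated FOV and resolution relations, as this short worked example shows (the wavelength and spacing values are hypothetical, and the longest baseline is taken as the transmitter count times the spacing, consistent with the relations stated above):

```python
wavelength = 1.0e-6        # m (hypothetical)
spacing = 0.5              # m between adjacent transmitters (hypothetical)
n_tx = 10                  # transmitters per leg of the "L"
baseline = n_tx * spacing  # longest baseline of the array

fov = 2*wavelength/spacing           # angular FOV per axis (rad)
resolution = wavelength/baseline     # angular resolution per axis (rad)
n_elements = fov/resolution          # resolution elements per axis: 20
```

The wavelength cancels, so the image size in resolution elements depends only on the ratio of the longest baseline to the transmitter spacing.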
Another type of superresolution technique is to capture a set of conventional images (e.g., using a normal telescope and detector) with the object illuminated by a set of modulation patterns with spatial variations at scales significantly smaller than the diffraction limit of the telescope. See Eyal Ben-Eliezer and Emmanuel Marom, "Aberration-free superresolution imaging via binary speckle pattern encoding and processing," J. Opt. Soc. Amer. A 24(4), 1003-1010 (2007) (hereinafter the "Ben-Eliezer article"); E. Barrett, D. W. Tyler, P. M. Payton, K. Ip, D. N. Christie, "New approaches to image super-resolution beyond the diffraction limit," Proc. SPIE 6712, D1-D14 (2007) (hereinafter the "Barrett article"); D. Tyler, E. B. Barrett, "Simulation of a passive-grating heterodyning superresolution concept," Proc. SPIE 7094 (2008) (hereinafter the "Tyler article"); and S. A. Shroff, J. Fienup and D. R. Williams, "OTF compensation in structured illumination superresolution images," Proc. SPIE 7094 (2008) (hereinafter the "Shroff article"). As described in the Barrett article, high-frequency spatial content (e.g., fine scale features) of the object beyond the diffraction limited resolution cutoff will be aliased down into the pass-band of the optical system. By projecting a set of illumination patterns whose spatial fluctuations have scales and orientations that densely cover a region in Fourier space M-times larger than the diffraction limited region, the captured image set contains all of the information needed to generate an M-times higher resolution image. Many possible choices exist for the set of illumination patterns and for methods of combining the signals in this constituent image set to produce a final single high-resolution image. The Ben-Eliezer article uses a random set of illumination patterns, and forms the superresolved image by computing and algebraically combining various averages involving simple algebraic combinations (e.g., products, divisions, and subtractions) of the images.
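The aliasing mechanism described above can be illustrated in one dimension. The ideal low-pass passband, the specific frequencies chosen, and the single sinusoidal illumination pattern below are illustrative assumptions; the sketch shows only how illumination modulation shifts out-of-band object detail into the passband:

```python
import numpy as np

n = 256
x = np.arange(n)
cutoff = 20   # passband edge of the "telescope" (cycles per frame)

# Object detail at 50 cycles lies beyond the cutoff and is invisible in a
# conventionally illuminated image.
obj = 1 + 0.5*np.cos(2*np.pi*50*x/n)
illum = 1 + np.cos(2*np.pi*40*x/n)   # structured illumination at 40 cycles

def low_pass(signal, cutoff):
    """Ideal diffraction-limited imaging: drop frequencies above cutoff."""
    spec = np.fft.fft(signal)
    k = np.fft.fftfreq(len(signal), d=1/len(signal))
    spec[np.abs(k) > cutoff] = 0
    return np.fft.ifft(spec).real

def amplitude_at(signal, freq):
    return 2*abs(np.fft.fft(signal)[freq])/len(signal)

plain = low_pass(obj, cutoff)          # fine detail lost
coded = low_pass(obj*illum, cutoff)    # detail aliased down to 50 - 40 = 10

lost = amplitude_at(plain, 10)    # ~0: no 10-cycle content without coding
alias = amplitude_at(coded, 10)   # 0.25: the 50-cycle detail, shifted in-band
```

A set of such patterns covering many orientations and frequencies folds the entire M-times larger Fourier region into the passband, one patch per pattern.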
The Barrett and Tyler articles describe a deliberately chosen set of illumination patterns corresponding to sinusoids with a linear progression of two-dimensional (2D) sinusoid spatial frequencies in the horizontal and vertical directions, followed by a direct least-squares combination to undo the alias coding and produce a single restored super-resolved image. The Shroff article describes a similar method, but the progression of sinusoids rotates through several angles.
Although the Ben-Eliezer and Barrett articles describe the reconstruction of superresolved images from experimental data, the experiments described by both articles require the placement of spatially modulated transparency masks in close proximity to the back-illuminated object to be imaged. In some applications, such as remote sensing applications, such spatially modulated transparency masks cannot be placed in close proximity to the object to be imaged. As such, improved techniques for capturing superresolved images, including superresolved images of remote targets, would be desirable.