The present invention relates to the generation of images from projection measurements. Examples of images generated from projection measurements include two-dimensional and three-dimensional SAR (synthetic aperture radar) systems. SAR is a form of radar in which the large, highly directional rotating antenna used by conventional radar is replaced with many small, low-directivity stationary antennas scattered over some area near or around the target area. The many echo waveforms received at the different antenna positions are post-processed to resolve the target. SAR can be implemented by moving one or more antennas over relatively immobile targets, by placing multiple stationary antennas over a relatively large area, or by combinations thereof. A further example of images generated from projection measurements is ISAR (inverse SAR) systems, which image objects and many features on the ground from satellites, aircraft, vehicles or any other moving platform. SAR and ISAR systems are used in detecting, locating and sometimes identifying ships, ground vehicles, mines, buried pipes, roadway faults, tunnels, leaking buried pipes, etc., as well as in discovering and measuring geological features, forest features, mining volumes, etc., and in general mapping. For example, as shown in FIG. 1 of U.S. Pat. No. 5,805,098 to McCorkle, hereby incorporated by reference, an aircraft-mounted detector array is utilized to take ground radar measurements. Other examples of systems using projection measurements are fault inspection systems using acoustic imaging, submarine sonar for imaging underwater objects, seismic imaging systems for tunnel detection, oil exploration, geological surveys, etc., and medical diagnostic tools such as sonograms, echocardiograms, x-ray CAT (computer-aided tomography) equipment and MRI (magnetic resonance imaging) equipment.
Synthetic aperture radar (SAR) systems have been used in many applications to provide area mapping, surveillance, and target detection. The radar is usually mounted on an aircraft or a ground-based vehicle configured with transmitting and receiving antennas to transmit and measure the reflected radar signals from the areas of interest. Through signal processing, the reflected radar signals collected along the flight path are combined to form the SAR image for the area along one side (side-looking mode) or in front of the radar (forward-looking mode).
SAR (and other imaging) systems face a major challenge: the resulting imagery is contaminated with 1) system noise (due to system components), 2) interference noise (due to internal and external sources), and 3) sidelobes from large targets. The first two types are additive noise, and the last type (sidelobes) is multiplicative noise. These sources result in a high noise floor in SAR imagery and reduce the ability of the radar system to detect small targets, especially if these targets are located in the proximity of larger objects (natural or manmade). For other systems such as medical imaging systems, the detection of small targets (subtle features, tumors) in the presence of noise and other large objects is also a significant challenge.
Numerous techniques have been developed to suppress the additive noise. Suppression of the multiplicative noise is a much more challenging task since the noise level (sidelobes) is proportional to the size (radar cross-section) of the in-scene targets. Conventional shift-invariant windows have been used to reduce or suppress the sidelobe artifacts at the expense of resolution and a reduced signal-to-noise ratio (SNR). A family of spatially variant apodization techniques has been developed to address the sidelobe problem. These spatially variant apodization techniques generate nonlinear imagery (the phase information is not preserved in the resulting imagery). In U.S. Pat. No. 7,796,829, entitled “Method and System for Forming an Image with Enhanced Contrast and/or Reduced Noise,” a nonlinear imaging technique called Recursive Sidelobe Minimization (RSM) was disclosed that significantly reduces the noise level in real SAR imagery by 5-10 dB. More recently, ARL invented another method, Image Formation by Pixel Classification (IF-PC), which significantly improves the suppression of the noise level over Recursive Sidelobe Minimization. Image Formation by Pixel Classification achieves state-of-the-art performance, generating virtually noise-free imagery. The key idea of Image Formation by Pixel Classification is to classify each pixel of a sequence of subaperture SAR images into a real object class or a noise class based on the magnitude of the pixel's normalized standard deviation. If the normalized standard deviation is larger than a threshold, the pixel is classified into the noise class; otherwise, the pixel is classified into the target class. Despite its superior performance in noise suppression, Image Formation by Pixel Classification still has two important features that could be further improved. First, the Image Formation by Pixel Classification technique is still based on a nonlinear signal processing technique.
The pixel classification process (real object/noise) is computed using magnitude data, and thus, the results are also magnitude imagery. This type of imagery is appropriate for applications that only require magnitude information. However, the complex imagery contains much more information (phase and frequency response), which may be the key for target discrimination and classification. Although the target/noise classification information from Image Formation by Pixel Classification could be employed in conjunction with the baseline complex imagery to derive the noise-reduced complex imagery, this indirect operation could result in discontinuities in the complex imagery. Second, the pixel classification process in Image Formation by Pixel Classification depends on the statistics of a single pixel across multiple iterations, and thus does not capture the local spatial correlation across many pixels from the same object.
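The pixel-classification rule described above can be illustrated with a minimal sketch. This is not the disclosed IF-PC implementation; the function name, array shapes, and threshold value are assumptions for illustration only. It applies the stated rule: compute each pixel's standard deviation across a stack of subaperture magnitude images, normalize it, and threshold it.

```python
import numpy as np

def classify_pixels(subaperture_images, threshold):
    """Classify each pixel as target or noise from a stack of subaperture images.

    subaperture_images: array of shape (L, H, W) holding L magnitude
    subaperture SAR images on a common imaging grid.
    Returns a boolean mask that is True where the pixel is classified
    into the target (real object) class.
    """
    stack = np.abs(subaperture_images)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    # Normalized standard deviation of each pixel across the stack.
    # Stable scatterers vary little between subapertures, while
    # sidelobes and noise fluctuate strongly.
    norm_std = std / np.maximum(mean, np.finfo(float).eps)
    # Normalized deviation above the threshold -> noise class;
    # otherwise -> target class, per the rule stated above.
    return norm_std <= threshold
```

A pixel produced by a persistent scatterer has a near-zero normalized deviation and survives the mask; a pixel dominated by fluctuating sidelobe energy does not.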
Back Projection of SAR Image
Systems which produce images from projection data generally use techniques in the time domain, where a backprojection-type algorithm is used, or in the frequency domain, where Fourier transforms are used. For example, time domain backprojection-based techniques have been used for numerous applications, including x-ray CAT scans, MRI and sonograms. Historically, the medical imaging community has preferred backprojection because its artifact levels were lower than those of fast Fourier transform (FFT) approaches.
Synthetic aperture radar systems have been used in applications such as area mapping, surveillance, and target detection. The radar is usually mounted on an aircraft or a vehicle configured with transmitting and receiving antennas to transmit and measure the reflected radar signals from areas of interest. Through signal processing, the reflected radar signals along the flight path are combined to form the SAR image for side-looking or forward-looking surveillance.
SAR imaging is complex for a variety of reasons. First, the data is not collected at equally spaced (or known) points. Instead, data may be collected in a non-uniform manner from an aircraft that is buffeted by the wind or from a ground vehicle that traverses rough ground. Therefore, motion compensation must be introduced in order to produce sharp images. Second, the subject objects need not be point sources but may be dispersive, storing energy and “re-radiating” it over time. Ground-penetrating SAR adds the complication that the propagation velocity of the media varies, a problem also encountered in seismic processing. For many SAR applications, especially for high-resolution, ultra-wide-angle (UWA), ultra-wide-bandwidth (UWB) surveillance systems, the task is particularly problematic because the data sets are large, real-time operation is essential, and the aperture geometry is not controlled. For example, a small aircraft buffeted by the wind can affect SAR data through significant off-track motion and velocity changes. As a result, the data is not sampled at equally spaced intervals.
Backprojection techniques provide many advantages, including sharper images. Although prior art backprojector implementations may generate image artifacts, those artifacts are constrained to be local to the object generating them and generally lie within the theoretical sidelobes. Sidelobes are the lobes of the radiation pattern that are not the main beam or lobe. In an antenna radiation pattern or beam pattern, the power density in the sidelobes is generally much less than that in the main beam. It is generally desirable to minimize the sidelobe level (SLL), commonly measured in decibels relative to the peak of the main beam. The concepts of main and side lobes apply to (but are not limited to), for example, radar and optics (two specific applications of electromagnetics) and sonar. The present invention is directed to techniques which minimize the effects of theoretical sidelobes in order to provide enhanced images.
Backprojector techniques also allow for non-uniform spacing of the projection data. The non-uniform spacing is directly accounted for in the index generation, which is important when compensating for aircraft motion.
Conventional time domain image formation, or backprojection, from SAR data is accomplished by coherently summing the sampled radar returns for each pixel. In this context, coherent summation can be thought of as time-shifting the signal obtained at each aperture position (to align the signals to a particular pixel) and adding across all aperture positions to integrate the value at that pixel. This time-align-and-sum sequence is repeated for every pixel in the image.
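The time-align-and-sum sequence described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: it assumes the per-pixel delay indices have already been computed and rounded to integer sample positions, and the function and parameter names are invented for the example.

```python
import numpy as np

def backproject(records, delays, weights=None):
    """Coherent time-align-and-sum over all aperture positions.

    records: (K, T) array, one sampled radar return per aperture position.
    delays:  (K, N) integer array; delays[k, i] is the sample index into
             record k that aligns record k to image pixel i.
    Returns the N-pixel image formed by summing the aligned samples.
    """
    K, _ = records.shape
    if weights is None:
        weights = np.ones(K)
    image = np.zeros(delays.shape[1], dtype=records.dtype)
    for k in range(K):
        # Pick, from record k, the sample aligned to each pixel,
        # weight it, and accumulate into the image.
        image += weights[k] * records[k, delays[k]]
    return image
```

In practice the delay is fractional and the sample value is interpolated rather than indexed directly, but the align-then-accumulate structure is the same.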
A method and system for forming images by backprojection is explained in U.S. Pat. No. 5,805,098 to McCorkle, hereby incorporated by reference as though fully rewritten herein.
FIG. 1A illustrates an example utilizing the basic concept of the backprojection imaging algorithm. The radar is mounted on a moving platform. It transmits radar signals to illuminate the area of interest and receives return signals from the area. Using the motion of the platform, the radar collects K data records along its path (or aperture). In general, the aperture could be a line, a curve, a circle, or any arbitrary shape. The receiving element k from the aperture is located at the coordinate (xR(k), yR(k), zR(k)). For bistatic radar (the transmitting antenna is separate from the receiving antenna), the transmitting element k from the aperture is located at the coordinate (xT(k), yT(k), zT(k)). For monostatic radar (the transmitting antenna is the same as or co-located with the receiving antenna), the transmitting coordinates (xT(k), yT(k), zT(k)) would be the same as the receiving coordinates (xR(k), yR(k), zR(k)). Since the monostatic radar case is a special case of the bistatic radar configuration, the algorithm described here is applicable to both configurations. The returned radar signal at receiving element k is sk(t). In order to form an image of the area of interest, we form an imaging grid that consists of N image pixels. Each pixel P(i) from the imaging grid is located at coordinate (xP(i), yP(i), zP(i)). The imaging grid is usually defined as a 2-D rectangular shape. In general, however, the image grid could be arbitrary. For example, a 3-D imaging grid would be formed for ground penetration radar to detect targets and structures buried underground. Another example is a 3-D image of the inside of the human body. Each measured range profile sk(t) is corrected for the R² propagation loss, i.e., sk′(t) = R²(t)·sk(t), where
R(t) = ct/2 and c = 2.997×10^8 m/sec. The backprojection value at pixel P(i) is
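The R² range-loss correction above can be sketched for a uniformly sampled range profile. The function name and the sample-spacing parameter `dt` are assumptions introduced for the example.

```python
import numpy as np

C = 2.997e8  # propagation speed in m/sec, as given above

def range_loss_correct(s, dt):
    """Apply the propagation-loss correction s'(t) = R(t)^2 * s(t),
    with R(t) = c*t/2 for the round-trip travel time t.

    s:  sampled range profile s_k(t)
    dt: sample spacing in seconds
    """
    t = np.arange(len(s)) * dt
    R = C * t / 2.0          # range corresponding to each sample time
    return (R ** 2) * s
```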
P(i) = Σ (k = 1 to K) wk·sk′(f(i,k)),  1 ≤ i ≤ N  (1)

where wk is the weight factor and f(i,k) is the delay index into sk(t) necessary to coherently integrate the value for pixel P(i) from the measured data at receiving element k.
The index is computed using the round-trip distance between the transmitting element, the image pixel, and the receiving element. The transmitting element is located at the coordinate (xT(k), yT(k), zT(k)). The distance between the transmitting element and the image pixel P(i) is

d1(i,k) = √[(xT(k) − xP(i))² + (yT(k) − yP(i))² + (zT(k) − zP(i))²]  (2)
The distance between the receiving element and the image pixel P(i) is

d2(i,k) = √[(xR(k) − xP(i))² + (yR(k) − yP(i))² + (zR(k) − zP(i))²]  (3)

The total distance is

d(i,k) = d1(i,k) + d2(i,k)  (4)

The delay index is
f(i,k) = d(i,k)/c  (5)
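The delay computation of equations (2) through (5) can be sketched directly from the coordinates. The function name and argument layout are assumptions for illustration; the arithmetic follows the equations above, returning the delay in seconds (conversion to a sample index depends on the sampling rate).

```python
import numpy as np

C = 2.997e8  # propagation speed in m/sec

def delay_index(tx, rx, pixel):
    """Delay for one pixel and one aperture position: the round-trip
    distance divided by c, per Eqs. (2)-(5).

    tx, rx, pixel: 3-element (x, y, z) coordinates of the transmitting
    element, the receiving element, and the image pixel.
    """
    tx, rx, pixel = map(np.asarray, (tx, rx, pixel))
    d1 = np.linalg.norm(tx - pixel)   # Eq. (2): transmitter-to-pixel distance
    d2 = np.linalg.norm(rx - pixel)   # Eq. (3): pixel-to-receiver distance
    return (d1 + d2) / C              # Eqs. (4)-(5): total distance over c
```

For the monostatic case, tx and rx are simply the same coordinate, consistent with the text above.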
FIG. 1B illustrates a typical imaging geometry for an ultra-wideband forward-looking (e.g., SIRE) radar. In this case, the radar is configured in forward-looking mode instead of the side-looking mode illustrated in FIG. 1A. In this forward-looking mode, the radar travels and radiates energy in the same direction. The general backprojection algorithm described above applies to the embodiment of FIG. 1B. As seen in FIG. 1B, the radar travels parallel to the x-axis. The backprojection image formation is combined with the mosaic technique. The large area image is divided into sub-images. The size of each sub-image may be, for example, 25 m in cross-range and only 2 m in down-range (x-axis direction). The radar starts at coordinate A, which is 20 m from sub-image 1, and illuminates the entire image area to the right.
The following is a description of the platform 10 in FIG. 1B as it passes four sequential positions 10A, 10B, 10C & 10D located at x-coordinates A, B, C & D, respectively. The formation of the first sub-image begins when platform 10 is at coordinate A, 20 meters from the block labeled “1st sub-image.” As platform 10 travels in the x direction (as shown in FIG. 1B), signals emitted from platform 10 illuminate the entire image area to the right of platform 10, and the reflected signals are received by an array of 16 physical receiving antennas 11 positioned on the front of the platform 10. Formation of the first sub-image ends when platform 10 reaches coordinate C, at approximately 8 m from the block labeled “1st sub-image.” Accordingly, the radar signal data for the first (full-resolution) sub-image is received as radar platform 10 travels a distance of 12 meters (20 m−8 m=12 m) from coordinate A to coordinate C, forming a two-dimensional (2-D) aperture.
The distance traveled during the formation of the two-dimensional (2-D) aperture is represented by an arrow in FIG. 1B labeled “Aperture 1.” When the platform 10 reaches coordinate B, a distance of 2 meters from coordinate A in FIG. 1B, the formation of the “2nd sub-image” begins and as the platform 10 travels to coordinate D, it uses the received data to form a second 2-D aperture. The distance traveled by platform 10 is represented by an arrow in FIG. 1B labeled “Aperture 2.” Note that the two apertures are overlapped by 10 m and the second aperture is “advanced” by 2 m with respect to the first aperture. Sub-images 1 and 2 are formed from the 2-D apertures using the same length of travel (12 meters) of the radar. This process is applied to ensure that image pixels have almost the same (within a specified tolerance) resolution across the entire large area. The sub-images are formed from the radar range profiles using the back-projection algorithm.
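The overlapping-aperture bookkeeping described above can be sketched with a short helper. The function name is invented for illustration; the default values (12 m aperture length, 2 m advance) are the figures from the example, so consecutive apertures overlap by 10 m as stated.

```python
def aperture_extents(n_subimages, start=0.0, aperture_len=12.0, advance=2.0):
    """Start/end x-coordinates of the 2-D aperture used for each sub-image.

    Each aperture spans aperture_len meters of platform travel and is
    advanced by `advance` meters relative to the previous one, so
    consecutive apertures overlap by aperture_len - advance meters.
    Returns a list of (start_x, end_x) tuples, one per sub-image.
    """
    return [(start + i * advance, start + i * advance + aperture_len)
            for i in range(n_subimages)]
```

Using the same 12 m travel for every sub-image is what keeps the pixel resolution nearly constant across the large-area mosaic.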
The term “noise” as used herein relates to image noise. There are many sources that cause noise in the resulting image. Noise can be divided into two categories: additive noise and multiplicative noise. System noise, thermal noise, quantization noise, self-interference noise, and radio frequency interference (RFI) noise are some examples of additive noise. Multiplicative noise is much more difficult to deal with since it is data dependent. Some sources that cause multiplicative noise include timing jitter in data sampling, small aperture size compared to the image area, under-sampling of aperture samples, non-uniform spacing between aperture samples, errors in the position measurement system, etc. Multiplicative noise results in undesired sidelobes that create a high noise floor in the image and thus limit the ability to detect targets with smaller amplitudes.