The present invention relates to the generation of images from projection measurements. Examples of images generated from projection measurements include two-dimensional and three-dimensional SAR (synthetic aperture radar) systems. SAR is a form of radar in which the large, highly-directional rotating antenna used by conventional radar is replaced with many low-directivity small stationary antennas scattered over some area near or around the target area. The many echo waveforms received at the different antenna positions are post-processed to resolve the target. SAR can be implemented by moving one or more antennas over relatively immobile targets, by placing multiple stationary antennas over a relatively large area, or by combinations thereof. A further example of images generated from projection measurements is ISAR (inverse SAR) systems, which image objects and many features on the ground from satellites, aircraft, vehicles or any other moving platform. SAR and ISAR systems are used in detecting, locating and sometimes identifying ships, ground vehicles, mines, buried pipes, roadway faults, tunnels, leaking buried pipes, etc., as well as discovering and measuring geological features, forest features, mining volumes, etc., and general mapping. For example, as shown in FIG. 1 of U.S. Pat. No. 5,805,098 to McCorkle, hereby incorporated by reference, an aircraft-mounted detector array is utilized to take ground radar measurements. Other examples of systems using projection measurements are fault inspection systems using acoustic imaging, submarine sonar for imaging underwater objects, seismic imaging systems for tunnel detection, oil exploration, geological surveys, etc., and medical diagnostic tools such as sonograms, echocardiograms, x-ray CAT (computer-aided tomography) equipment and MRI (magnetic resonance imaging) equipment.
FIG. 1A illustrates the basic concept of the backprojection imaging algorithm. The radar is mounted on a moving platform. It transmits radar signals to illuminate the area of interest and receives return signals from that area. Using the motion of the platform, the radar collects K data records along its path (or aperture). In general the aperture could be a line, a curve, a circle, or any arbitrary shape. The receiving element k from the aperture is located at the coordinate (xR(k), yR(k), zR(k)). For bistatic radar (the transmitting antenna is separate from the receiving antenna) the transmitting element k from the aperture is located at the coordinate (xT(k), yT(k), zT(k)). For monostatic radar (the transmitting antenna is the same as or co-located with the receiving antenna) the transmitting coordinates (xT(k), yT(k), zT(k)) would be the same as the receiving coordinates (xR(k), yR(k), zR(k)). Since the monostatic radar case is a special case of the bistatic radar configuration, the algorithm described here is applicable to both configurations. The returned radar signal at receiving element k is sk(t). In order to form an image of the area of interest, an imaging grid is formed that consists of N image pixels. Each pixel Pi from the imaging grid is located at coordinate (xP(i), yP(i), zP(i)). The imaging grid is usually defined as a 2-D rectangular shape. In general, however, the image grid could be arbitrary. For example, a 3-D imaging grid would be formed for ground penetrating radar to detect targets and structures buried underground. Another example is a 3-D image of the inside of the human body. Each measured range profile sk(t) is corrected for the R² propagation loss, i.e. s′k(t) = R²(t)sk(t), where
R(t) = ct/2 and c = 2.997×10⁸ m/sec. The backprojection value at pixel P(i) is
$$P(i)=\sum_{k=1}^{K} w_k\, s'_k\!\left(f(i,k)\right),\qquad 1\le i\le N\tag{1}$$
where wk is the weight factor and f(i,k) is the delay index to sk′(t) necessary to coherently integrate the value for pixel P(i) from the measured data at receiving element k.
The index is computed using the round-trip distance between the transmitting element, the target point forming the image (pixel), and the receiving element. The transmitting element is located at the coordinate (xT(k), yT(k), zT(k)). The distance between the transmitting element and the target point forming the image pixel P(i) is:
$$d_1(i,k)=\sqrt{\left[x_T(k)-x_P(i)\right]^2+\left[y_T(k)-y_P(i)\right]^2+\left[z_T(k)-z_P(i)\right]^2}\tag{2}$$
The distance between the receiving element and the target point forming the image pixel P(i) is
$$d_2(i,k)=\sqrt{\left[x_R(k)-x_P(i)\right]^2+\left[y_R(k)-y_P(i)\right]^2+\left[z_R(k)-z_P(i)\right]^2}\tag{3}$$
The total distance is
$$d(i,k)=d_1(i,k)+d_2(i,k)\tag{4}$$
The delay index is
$$f(i,k)=\frac{d(i,k)}{c}\tag{5}$$
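Equations (1) through (5) can be sketched in code. The following is a minimal illustrative implementation, not the patented implementation itself; the function name, the sampling-rate parameter fs, and the nearest-sample rounding of the delay index are assumptions for illustration. It assumes the range profiles have already been corrected for the R² propagation loss.

```python
import numpy as np

def backproject(s, xyz_T, xyz_R, xyz_P, fs, c=2.997e8, w=None):
    """Backprojection per Eqs. (1)-(5).

    s     : (K, T) array of range-corrected profiles s'_k(t), sampled at rate fs
    xyz_T : (K, 3) transmitter positions
    xyz_R : (K, 3) receiver positions
    xyz_P : (N, 3) image pixel positions
    w     : optional (K,) weight factors w_k (defaults to all ones)
    """
    K = s.shape[0]
    N = xyz_P.shape[0]
    if w is None:
        w = np.ones(K)
    P = np.zeros(N)
    for k in range(K):
        d1 = np.linalg.norm(xyz_T[k] - xyz_P, axis=1)   # Eq. (2)
        d2 = np.linalg.norm(xyz_R[k] - xyz_P, axis=1)   # Eq. (3)
        f = (d1 + d2) / c                               # Eqs. (4)-(5): delay in seconds
        idx = np.rint(f * fs).astype(int)               # nearest fast-time sample (an assumption;
                                                        # interpolation could be used instead)
        valid = (idx >= 0) & (idx < s.shape[1])
        P[valid] += w[k] * s[k, idx[valid]]             # coherent sum, Eq. (1)
    return P
```

For a single monostatic record containing a point-target echo, the pixel at the target's range coherently accumulates the echo while other pixels index into empty fast-time bins.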
FIG. 1B illustrates a typical imaging geometry for an ultra wide band forward looking (e.g., SIRE) radar. In this case, the radar is configured in forward-looking mode instead of side-looking mode as illustrated in FIG. 1A. In this forward-looking mode, the radar travels and radiates energy in the same direction. The general backprojection algorithm applies to the embodiment of FIG. 1B. As seen in FIG. 1B, the radar travels in parallel to the x-axis. The backprojection image formation is combined with the mosaic technique. The large area image is divided into sub-images. The size of each sub-image may be, for example, 25 m in cross-range and only 2 m in down-range (x-axis direction). The radar starts at coordinate A, which is 20 m from sub-image 1, and illuminates the entire image area to the right.
The height of the vehicle-mounted radar may be approximately 2 m from the ground. The imaging center may be located at approximately 20 m from the radar, but for various target resolutions, the imaging centers may be located at various ranges. For simplicity, a fixed range is used to form imagery and the motion of the vehicle is not shown. In practice, however, imagery is formed using the physical aperture of the antenna array and the synthetic aperture (SAR) generated by the forward motion of the vehicle. This two-dimensional aperture gives not only the cross-range resolution (from the physical aperture of the antenna array) but also the height resolution (from the forward motion) and thus results in a 3-dimensional image (see Nguyen, L. H.; Ton, T. T.; Wong, D. C.; Ressler, M. A., “Signal Processing Techniques for Forward Imaging Using Ultrawideband Synthetic Aperture Radar,” Proceedings of SPIE 5083, 505 (2003), hereby incorporated by reference). This approach also provides integration to achieve a better signal-to-noise ratio in the resulting image.
The following is a description of the platform 10 in FIG. 1B as it passes four sequential positions 10A, 10B, 10C & 10D located at x-coordinates A, B, C & D, respectively. The formation of the first sub-image begins when platform 10 is at coordinate A, 20 meters from the block labeled “1st sub-image.” As platform 10 travels in the x direction (as shown in FIG. 1B), signals emitted from platform 10 illuminate the entire image area to the right of platform 10, and the reflected signals are received by an array of 16 physical receiving antennas 11 positioned on the front of the platform 10. Formation of the first sub-image ends when platform 10 reaches coordinate C, approximately 8 m from the block labeled “1st sub-image.” Accordingly, the radar signal data for the first (full-resolution) sub-image is received as radar platform 10 travels a distance of 12 meters (20 m−8 m=12 m) from coordinate A to coordinate C, for formation of a two-dimensional (2-D) aperture.
The distance traveled during the formation of the two-dimensional (2-D) aperture is represented by an arrow in FIG. 1B labeled “Aperture 1.” When the platform 10 reaches coordinate B, a distance of 2 meters from coordinate A in FIG. 1B, the formation of the “2nd sub-image” begins, and as the platform 10 travels to coordinate D, it uses the received data to form a second 2-D aperture. The distance traveled by platform 10 is represented by an arrow in FIG. 1B labeled “Aperture 2.” Note that the two apertures are overlapped by 10 m and the second aperture is “advanced” by 2 m with respect to the first aperture. Sub-images 1 and 2 are formed from the 2-D apertures using the same length of travel (12 meters) of the radar. This process is applied to ensure that image pixels have almost the same (within a specified tolerance) resolution across the entire large area. The sub-images are formed from the radar range profiles using the back-projection algorithm.
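The aperture bookkeeping for the mosaic scheme above can be sketched with a small helper (a hypothetical function, not part of the patent), using the stated numbers: each 2-D aperture spans 12 m of travel and successive sub-image apertures are advanced by 2 m, which yields the 10 m overlap noted above.

```python
def aperture_for_subimage(n, start_x=0.0, advance=2.0, length=12.0):
    """Return (start, end) x-coordinates of the 2-D aperture used to form
    sub-image n (1-based).  Assumes the radar begins aperture 1 at start_x
    and each successive aperture is advanced by `advance` meters; the
    aperture length (12 m) is the same for every sub-image so that all
    pixels have nearly the same resolution across the large image area.
    """
    a = start_x + (n - 1) * advance
    return a, a + length
```

With these defaults, aperture 1 spans (0, 12) and aperture 2 spans (2, 14), so consecutive apertures overlap by 12 − 2 = 10 m, matching the geometry of FIG. 1B.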
FIG. 2 schematically diagrams the back-projection algorithm applied to form a sub-image. The procedure mathematically described with respect to FIG. 1A in the above paragraphs may also be applied to this imaging scenario. In this case, the radar aperture is a rectangular array formed by an array of 16 receiving elements (spanning 2 meters) and the forward motion of the platform (12 meters for forming each sub-image). The imaging grid in this case is defined as a rectangular array of 25×2 meters. Further details may be found in U.S. Pat. No. 7,796,829, hereby incorporated by reference.
Many applications such as radar and communication systems exploit the features that wide-bandwidth signals offer. However, the implementation of front-end receivers to directly digitize these wide-bandwidth signals requires that the analog-to-digital converter (ADC) digitize the signals at a frequency above the minimum Nyquist rate. The Nyquist–Shannon sampling theorem states that an analog signal can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency of the original signal. In the case of time-domain impulse-based Ultra-Wideband (UWB) radar, to be above the Nyquist rate the clock speed of the front-end analog-to-digital converter (ADC) must be higher than twice the highest frequency content of the wide-bandwidth signal, which has a bandwidth from 300 MHz to 3000 MHz. In practice, the received radar data is directly digitized at an equivalent sampling rate of 7.72 Giga-samples/sec, which is slightly higher than the required Nyquist sampling rate of at least 6 Giga-samples/sec.
Because of this challenge, state-of-the-art systems employ various equivalent-time sampling techniques that allow the reconstruction of wideband signals from slower sampling rates. These equivalent-time sampling techniques are based on the assumption that a signal waveform is repeatable over many observations. By acquiring the same signal waveform at different phase delays with a sub-Nyquist sampling rate, the signal waveform can be reconstructed by interleaving data from the individual observations. In other words, these equivalent-time techniques depend on many observations of the same signal waveform, interleaving data from the individual observations to reconstruct the original information. These techniques do not work if there is only one chance to observe the signal, or if the acquired signal is not repeatable from one observation to the next. In addition, since it takes many observations of a signal waveform to complete one acquisition cycle (hence the term equivalent time), the effective data acquisition rate is much slower than real-time data acquisition, which uses analog-to-digital converters (ADCs) that operate above the Nyquist rate. Equivalent-time data acquisition results in many disadvantages, such as slower data acquisition time, lower pulse repetition frequency (PRF), lower average power, etc. In several practical applications, the assumption that a signal is repeatable over many measurements might not be practical or even realizable.
In the case of Ultra-Wideband (UWB) radar, which offers advantageous penetration capability due to its low-frequency content and high resolution due to the wide bandwidth of the transmit signals, a technique referred to as the synchronous impulse reconstruction (SIRE) sampling technique employs equivalent-time sampling to allow the reconstruction of a wide-bandwidth signal using analog-to-digital converters (ADCs) operating below the Nyquist rate. The ARL SIRE radar system employs an Analog Devices 12-bit ADC to digitize the returned radar signals. However, the ADC is clocked at the system clock rate of 40 MHz. From well-known sampling theory, it is not possible to directly reconstruct the wide-bandwidth signal (300 MHz to 3000 MHz), since the clock rate of the ADC is much slower than the required minimum Nyquist sampling rate (in this case 6000 MHz). However, by using the synchronous equivalent-time sampling technique, a much higher effective sampling rate is achieved. FIG. 4 provides a graphical representation of the SIRE acquisition technique. Further details are described in Nguyen, L., “Signal and Image Processing Algorithms for the U.S. Army Research Laboratory Ultra-wideband (UWB) Synchronous Impulse Reconstruction (SIRE) Radar,” Army Research Laboratory Technical Report ARL-TR-4784, April 2009, hereby incorporated by reference, and Ressler, Marc, et al., “The Army Research Laboratory (ARL) Synchronous Impulse Reconstruction (SIRE) Forward-Looking Radar,” Proceedings of SPIE, Unmanned Systems Technology IX, Vol. 6561, May 2007, hereby incorporated by reference.
The analog-to-digital converter (ADC) sampling period is Δt; the value of this parameter in FIG. 4 is 25 ns, which corresponds to an A/D sampling rate of 40 MHz. The number of samples for each range profile is denoted by N, which is equal to 7 in our current configuration. This value corresponds to a range swath of 26 m. The system pulse repetition frequency (PRF) is 1 MHz. The system pulse repetition interval (PRI), the inverse of the PRF, is 1 μs. Each aliased (sub-Nyquist sampled) radar record is measured M times (1024 in the example of FIG. 4), and the measurements are integrated to achieve a higher signal-to-noise level. After summing M repeated measurements of the same range profile, the first range (fast-time) bin is advanced by Δ. Thus, the next group of M range profiles is digitized with a timing offset of Δ with respect to the transmit pulse. The parameter Δ represents a time sample spacing that satisfies the Nyquist criterion for the transmitted radar signal. This time sample spacing is 129.53 ps, which corresponds to a sampling rate of 7.72 Giga-samples/sec. This effective sampling rate is sufficient for the wide-bandwidth radar signal (300 MHz-3000 MHz). The number of interleaved samples is
$$\kappa=\frac{\Delta t}{\Delta},$$
which is 193 in FIG. 4. After K groups of M pulses are transmitted and the return signals are digitized and summed by the Xilinx Spartan 3 field-programmable gate array (FPGA), the result is a raw radar record of N·K samples with an equivalent fast-time sample spacing of Δ. The total time to complete one data acquisition cycle is K·M·PRI, which is 197.6 ms in this case. It should be noted that during the entire data acquisition cycle (197.6 ms), the relative position between the radar and the targets is assumed to be stationary.
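The interleaving step of this equivalent-time acquisition can be illustrated with a short sketch (the function name and array layout are assumptions for illustration, not the FPGA implementation): K groups of N slow-rate samples, where group j is acquired with timing offset j·Δ and the slow sample period is Δt = K·Δ, are woven into a single record of N·K samples at the equivalent spacing Δ.

```python
import numpy as np

def interleave_equivalent_time(groups, K, N):
    """Reconstruct a fast-sampled record from K offset groups of slow
    (aliased) samples, as in equivalent-time acquisition.

    groups : (K, N) array; row j holds the N samples taken at times
             j*delta + i*dt, with dt = K*delta, so sample i of group j
             corresponds to fine-grid index j + i*K.
    Returns a (N*K,) record with equivalent sample spacing delta.
    """
    out = np.empty(N * K)
    for j in range(K):
        out[j::K] = groups[j]   # offset-j samples occupy every K-th slot
    return out
```

With the parameters from FIG. 4 (K = 193, M = 1024, PRI = 1 μs), the full cycle takes K·M·PRI = 193·1024·1 μs ≈ 197.6 ms, consistent with the figure.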
As previously mentioned, the advantage of the equivalent-time sampling technique is that it relieves the clock-rate requirements for the ADCs. However, there are two major problems. First, the data acquisition time is significantly increased. Second, the technique is based on the assumption that the signal is the same from one observation to the next. In this case the radar and all targets in the scene must be stationary during the entire data acquisition cycle. If this condition is met, the received signal can be perfectly reconstructed by interleaving the data from the many returned waveforms. However, this assumption often does not hold in practice, since the change in relative position between the radar and the targets is no longer negligible during the data acquisition cycle due to the motion of the radar platform. This is even worse for the forward-looking radar geometry, since the radar in this case moves toward the imaging area. Even at a slow platform speed (1 mile per hour), the relative motion between the radar and the targets during the data acquisition cycle results in severe phase and shape distortions in the reconstructed signal. This in turn results in poor focus quality and a low signal-to-noise level in the SAR imagery. Although some of these artifacts can be corrected by signal processing algorithms, this equivalent-time technique limits the maximum speed of the radar platform.
The SIRE sampling technique, a modified and enhanced version of the equivalent-time sampling technique used in commercial digital storage oscilloscopes and other radar systems, allows the use of inexpensive A/D converters to digitize wide-bandwidth signals. However, like other equivalent-time sampling techniques, its basic assumption is that the signal is repeatable from one observation to the next. Thus, by acquiring the same waveform over many observations with different phase offsets, the under-sampled data records can be interleaved to reconstruct an equivalent over-sampled data record. This results in many side effects, including distortion of the captured waveform due to the relative motion of the radar and the target during the data acquisition cycle (the returned radar waveforms change during the acquisition cycle). In addition, equivalent-time data acquisition translates to lower average power and a lower effective pulse repetition frequency (PRF) for the SAR system. Other state-of-the-art implementations use multiple ADCs in a parallel configuration to increase the effective sampling rate and reduce the data acquisition time. However, the use of parallel ADCs significantly increases the size, weight, power, and cost of the receiver.