1. Field of the Invention
The present invention relates generally to image processing and, more particularly, to an image processing target detection system and method that uses adaptive spatial filtering and time-differencing processes to detect and track targets within various background environments.
2. Description of the Related Art
Passive IR (infrared) sensors are widely used to detect the energy emitted from targets, backgrounds, incoming threats, and the atmosphere for a plurality of applications, including military surveillance, missile targeting and detection systems, crop and forest management, and weather forecasting. The measures of performance for passive IR sensors include signal-to-noise ratio (S/N), radiation contrast, noise-equivalent temperature difference (NEDT), minimum resolvable temperature difference, and other parameters. These sensors may be designed to enhance one or more of these parameters for optimum performance in a particular application.
Particularly, one type of passive IR sensor, the IRST (infrared search and track) sensor, locates and tracks objects by capturing the energy emitted within the field of view (FOV) or field of regard (FOR) of the sensor. However, IRST sensors are commonly designed to operate with a small noise-equivalent temperature difference (NEDT) to detect small target-to-background contrast temperatures, and therefore heavy background clutter may strongly hinder accurate target detection and tracking and lead to a higher probability of false alarm (Pfa). Importantly for threat detection applications, it is useful for the IRST sensor to detect, declare, and track airborne targets at a long distance (typically greater than 50 km) before the threat can see the intended target, and therefore IRST sensor performance may be enhanced using a large instantaneous field of view (e.g., a 360-degree hemisphere in azimuth and 50 to 90 degrees in elevation). However, the large number of scene pixels produced by an IRST sensor may require computer-controlled image data processing to separate the large number of false targets from the true targets. As shown in FIG. 1A, a common target detection and tracking scenario for military applications may be a fighter jet 109 attempting to detect and track incoming fighter jets 122 and/or incoming missiles (bombs) 124 that may be enemy-controlled.
Commonly, the IRST sensor uses two image data processing techniques for target (threat) detection and tracking: SpatialIRST and ChangeIRST. FIG. 1B illustrates an exemplary SpatialIRST image processing system 100 found in the prior art. During operation, an image 102 input from an IR sensor (not shown) is initially spatially convolved with a matched filter 104 to generate a spatially filtered image output. The matched filter 104 may generally be designed using the well-known system point spread function (PSF), since at a long distance an incoming airborne target may be considered a point radiant source. A point spread function maps the intensity distribution of the signal received at the sensor from a point source of light (an airborne target at a long distance). The spatially filtered output may be divided by a local background estimation (provided by an estimator 106) using a divider 108, which provides an output image to a CFAR (constant false alarm rate) detector 110. Use of a CFAR detector allows one or more detection threshold levels to be set to provide a maximum (tolerable) false alarm rate. The detector 110 provides an output signal 112 indicating detection.
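The SpatialIRST pipeline described above (matched filtering with the PSF, normalization by a local background estimate, and thresholding) can be sketched as follows; the Gaussian-like PSF, the window size, and the fixed threshold standing in for the CFAR detector are illustrative assumptions, not values from the prior art system.

```python
import numpy as np

def convolve2d_same(image, kernel):
    """Direct 2-D convolution with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def spatial_irst(image, psf, background_window=5, threshold=5.0):
    """Matched-filter the image with the PSF, divide by a local background
    estimate, and threshold the result (a fixed threshold stands in for
    the CFAR detector). Returns a boolean detection map."""
    filtered = convolve2d_same(image, psf)
    # Simple local background estimate: local mean of the absolute level.
    box = np.ones((background_window, background_window))
    box /= box.size
    background = convolve2d_same(np.abs(image), box) + 1e-9
    return filtered / background > threshold
```

For example, a single bright pixel on a dark background yields a detection at that pixel and nowhere else.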
However, SpatialIRST may produce a large number of false alarms when the background clutter contains high spatial frequency components. Also, when the background contains both low and heavy clutter sub-regions, traditional SpatialIRST may produce increased false alarms for the heavy clutter sub-regions, which in turn reduces the probability of detection for the low clutter sub-regions (since the detection threshold must be raised to control the overall false alarm rate).
For light or medium background clutter, the SpatialIRST system generally works well to detect and track targets, but performance suffers with heavy to extremely heavy background clutter (e.g., urban and earth object clutter), leading to a high Pfa. Under these conditions, a ChangeIRST image processing system may commonly be used, which employs a temporal time-differencing image processing technique that is useful for moving (e.g., airborne) targets. FIG. 2 illustrates an exemplary ChangeIRST image processing system 200 found in the prior art. During operation, a reference image (the current image frame) 202 and a previous image (the search image) 204 are filtered using a high-pass filter 206 and registered pixel-wise using a registering device 208 at a particular re-visit time (RT). Pixel registration is a well-known technique for aligning received images of the same scene. Commonly, a base image is used as a comparison reference for at least one other (input) image, and the registration process brings the input image into alignment with the base image by applying a spatial transformation to the input image. Using a subtractor 210, the registered search image may be subtracted from the reference image to suppress background clutter, and the output difference image may be fed to a CFAR (constant false alarm rate) detector 212 to generate a detection output signal 214.
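The registration and differencing stages described above can be sketched as follows. Exhaustive integer-shift correlation stands in for the registering device, a fixed threshold stands in for the CFAR detector, the high-pass pre-filtering stage is omitted for brevity, and the wrap-around behavior of `np.roll` is accepted as a sketch-level simplification.

```python
import numpy as np

def register_shift(reference, search, max_shift=3):
    """Estimate the integer (dy, dx) shift that best aligns `search` with
    `reference` by exhaustive correlation over a small shift window."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(search, dy, axis=0), dx, axis=1)
            score = float(np.sum(reference * shifted))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def change_irst(reference, search, threshold=10.0):
    """Register the search frame to the reference frame, subtract to
    suppress stationary background clutter, and threshold the absolute
    difference image. Returns a boolean detection map."""
    dy, dx = register_shift(reference, search)
    aligned = np.roll(np.roll(search, dy, axis=0), dx, axis=1)
    return np.abs(reference - aligned) > threshold
```

After registration, stationary clutter cancels exactly in the difference image, leaving only the moving target's current and previous positions above threshold.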
Alternatively, another ChangeIRST image processing system 300 found in the prior art may be used, as shown in FIG. 3. During operation of the alternative arrangement 300, an original large image 302 is under-sampled using a sampler 304 into a smaller matrix containing match point elements. These match point elements are registered using the registering device 208, and the registration locations are interpolated back to the original space domain. After interpolation, operation continues as in FIG. 2, with the subtractor 210 generating a difference signal input to the CFAR detector 212. This alternative ChangeIRST arrangement 300 uses a multi-resolution approach to reduce the throughput (computing load) required for image registration. However, the registration accuracy is decreased.
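The multi-resolution trade-off described above can be sketched as follows: registering under-sampled images shrinks the correlation search space by the square of the sampling factor, but the recovered full-resolution shift is only accurate to within roughly half the sampling factor. The under-sampling factor and search range are illustrative assumptions.

```python
import numpy as np

def coarse_register(reference, search, factor=4, max_shift=2):
    """Multi-resolution registration sketch: estimate the shift between
    under-sampled images by exhaustive correlation, then scale it back to
    the original resolution (accurate only to about +/- factor/2 pixels)."""
    ref_small = reference[::factor, ::factor]
    srch_small = search[::factor, ::factor]
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(srch_small, dy, axis=0), dx, axis=1)
            score = float(np.sum(ref_small * shifted))
            if score > best_score:
                best_score, best = score, (dy, dx)
    # "Interpolate" back to the original space domain: a coarse shift of
    # one pixel corresponds to `factor` pixels at full resolution.
    return best[0] * factor, best[1] * factor
```

When the true shift is an exact multiple of the sampling factor, the scaled coarse estimate recovers it exactly; otherwise the residual error motivates the accuracy loss noted above.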
International Patent Application Number PCT/US2004/005325, filed Feb. 24, 2004, entitled “A METHOD AND SYSTEM FOR ADAPTIVE TARGET DETECTION” discloses an adaptive long range IRST detection processor that contains an adaptive spatial filtering process (Adaptive SpatialIRST) as well as a spot time-differencing process (Spot ChangeIRST) for heavy background clutter suppression.
FIG. 4 is a flow process diagram of an exemplary adaptive IRST image processing system disclosed in International Patent Application Number PCT/US2004/005325. Advantageously, a controller may be used to control the flow process steps of the IRST imaging system. At step 402, a reference (current) image frame and a search (previous) image frame may be input, from an IRST sensor, into the system using a receiver and undergo image pre-processing including noise filtering and other pre-processing.
In an exemplary embodiment, the reference image may be received at a time (t) and the previous image may be received at a previous time (t−n). At step 404, the reference image is input to an adaptive spatial filtering path (further described below in reference to FIG. 5) for detection of an object within the sensor field of view (e.g., an impending threat such as a launched missile). At step 406, a decision block is reached where it is determined whether the background clutter in the field of view qualifies as high (heavy) clutter in accordance with a predetermined threshold. If yes, then processing continues at step 408, where spot time-differencing processing (spot ChangeIRST) is performed on the reference and search images to reduce the stationary detections due to clutter (such as buildings and rocks) and to pass moving detections (such as airborne targets).
Next, at step 410, the confirmed detections from the spot time-differencing step (step 408) may be combined with the low-clutter detections ("no" decision at step 406) from the spatial filtering step (step 404) to produce a detection summation output. At step 412, extended image processing including classification, identification, and tracking may occur using the summation detection result and the reference image as inputs to initiate and maintain tracking of the detected object.
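The branch-and-combine logic of steps 406 through 410 can be sketched as follows; the detection coordinates, the per-detection clutter map, and the `confirm` callable (standing in for the spot time-differencing confirmation of step 408) are hypothetical names introduced for illustration.

```python
def combine_detections(detections, clutter_level, clutter_threshold, confirm):
    """For each spatial detection, accept it directly when its sub-region
    has low clutter; otherwise require confirmation by spot
    time-differencing before passing it to the summation output."""
    summed = []
    for det in detections:
        if clutter_level[det] <= clutter_threshold:
            summed.append(det)   # low clutter: accept the spatial detection
        elif confirm(det):
            summed.append(det)   # high clutter: confirmed by time-differencing
    return summed
```

This reflects the design choice above: spot time-differencing is spent only on the high-clutter sub-regions where spatial filtering alone is unreliable.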
FIG. 5 is a block diagram of exemplary adaptive IRST image processing system 500 using adaptive spatial filtering. Advantageously, adaptive IRST image processing system 500 may be used for the detection/search scenario illustrated in FIG. 1A to replace the prior art systems 100, 200, 300 shown in FIGS. 1B, 2, 3. A controller 509 may be used to control the operation of the system 500.
As shown in FIG. 5, a reference (current) image frame 502 may be input from an IRST sensor field of view (not shown) to a spatial matching filter 504 using a receiver 507. Advantageously, the spatial filter 504 may perform high-pass filtering using a smaller template (the incoming pixel frame size for the filter), which enables faster detection by requiring less processing than larger templates. The filter 504 uses a previously detected object (e.g., a tank) as the center of the succeeding pixel frame of limited size (the smaller template), which accelerates accurate correlation and detection. Also, the spatial filter 504 may subtract a local mean from the original image to function as an anti-mean high-pass filter.
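The anti-mean filtering performed by the filter 504 can be sketched as follows; the template size and edge handling are illustrative assumptions.

```python
import numpy as np

def anti_mean_filter(image, template=3):
    """Anti-mean high-pass filter: subtract the local mean, computed over
    a small template centered on each pixel, from that pixel. A smaller
    template requires fewer operations per pixel, enabling faster
    detection, and passes less low-frequency background clutter."""
    p = template // 2
    padded = np.pad(image, p, mode='edge')
    out = np.empty(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = image[i, j] - padded[i:i + template, j:j + template].mean()
    return out
```

A flat scene maps to zero everywhere, while a point-like target retains most of its amplitude, which is why the filter suppresses smooth background while passing point radiant sources.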
Additionally, a background estimator 506 may estimate the noise of the background clutter of the IRST sensor field of view using the same anti-mean filter 504 or a different high-pass filter (e.g., a point spread function filter), and divide (using divider 508) the filtered image data input by the background noise estimation to produce an output image signal input to a CFAR (constant false alarm rate) detector 510. Use of a CFAR detector allows one or more detection threshold levels to be set to provide a maximum (tolerable) false alarm rate for the system 500. Advantageously, the anti-mean filter 504 with a smaller template may reduce the false alarm rate when the background clutter of the sensor field of view contains high frequency components.
Also, the reference image data 502 may be input to a local/regional sigma (noise standard deviation) estimator 512 to help estimate the standard deviation of the noise within the background clutter for the field of view. The estimator 512 divides the image data 502 into a plurality of different spatial sub-regions and determines (measures) the SNR and noise standard deviation for each sub-region, including a local region. Following the estimator 512, a threshold device 514 may set the SNR threshold levels for each sub-region based on the measurements of the estimator 512. Next, the CFAR detector 510 may receive the noise estimation and SNR threshold levels, along with the filtered/divided image data signal output, to determine whether an object (e.g., a predetermined threat target) is detected within the sensor field of view, and produces a detection output signal 516.
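The per-sub-region sigma estimation and thresholding performed by the estimator 512 and threshold device 514 can be sketched as follows; the block size and the k-sigma threshold rule are illustrative assumptions rather than parameters from the system described above.

```python
import numpy as np

def subregion_sigma_thresholds(image, block=8, k=5.0):
    """Split the frame into block x block sub-regions, estimate the noise
    standard deviation (sigma) in each, and derive a per-sub-region
    detection threshold of k * sigma, keyed by the sub-region's
    top-left pixel coordinate."""
    h, w = image.shape
    thresholds = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = image[i:i + block, j:j + block]
            thresholds[(i, j)] = k * float(sub.std())
    return thresholds
```

Quiet sub-regions thus receive low thresholds (preserving probability of detection) while noisy sub-regions receive high thresholds (controlling the false alarm rate), which is the point of setting thresholds per sub-region rather than globally.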
Following generation of the detection output signal 516, image processing may continue using the spot time-differencing system 600 of FIG. 6. FIG. 6 is a block diagram of an exemplary adaptive IRST image processing system 600 using spot time-differencing in accordance with an embodiment of the present invention. As shown in FIG. 6, the reference image 502 and a search (previous) image 601 input to the spatial filter 504 of system 500 may also be input to a high-pass filter/background estimator device 602 for filtering and for estimating the noise level of the background clutter across the plurality of sub-regions within the sensor field of view. The processing of system 600 continues only if high clutter is determined (step 406 of FIG. 4) for the particular sub-regions, since spot time-differencing is advantageously applied for detection confirmation in only high background clutter sub-regions. Next, the filtered reference and search image data 502, 601 are input to a registrator 604, which registers the pixel data of the input images 502, 601 for proper alignment of images of the same scene (field of view). The registrator 604 compares the input image data 502, 601 with base image data to determine whether a spatial transformation of the input image data is necessary for proper alignment with the base image data. Thereafter, a differencing component 606 may subtract the search image 601 from the reference image 502 to suppress background clutter, and the output difference image 603 is fed to a CFAR detector 608 to generate a detection output signal 609 indicating whether an object (e.g., a predetermined threat target) is detected in the sensor field of view.
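The "spot" aspect of the confirmation above, differencing only a small window around each candidate detection rather than the full frame, can be sketched as follows; registration is assumed already done, and the window size and threshold are illustrative assumptions.

```python
import numpy as np

def spot_confirm(reference, search, detection, window=5, threshold=10.0):
    """Spot time-differencing: difference only a small window around a
    candidate detection. A large peak absolute difference indicates a
    moving target (confirm); a small one indicates stationary clutter
    (reject). Assumes the two frames are already registered."""
    y, x = detection
    h = window // 2
    ref_win = reference[y - h:y + h + 1, x - h:x + h + 1]
    srch_win = search[y - h:y + h + 1, x - h:x + h + 1]
    return bool(np.abs(ref_win - srch_win).max() > threshold)
```

A stationary clutter object (e.g., a building or rock) appears identically in both frames and cancels in the window, while a moving target does not, which is how the confirmation rejects stationary detections.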
As shown at step 412 of FIG. 4, extended image processing including classification, identification, and tracking may occur using the spatial filtering processing output, time-difference detection output, and original reference image data as inputs to initiate and maintain tracking of the detected object.
Although the adaptive long range IRST techniques of copending International Patent Application Number PCT/US2004/005325 provide a substantial improvement over conventional IRST techniques, the adaptive long range IRST techniques are susceptible to temporal noise and to random phasing of the same object falling onto different subpixel locations in the previous and current image frames. Accordingly, there is a need for an improved IRST system that is less subject to temporal noise and that provides a high probability of detection in various background environments (light, medium, or heavy clutter) while maintaining a low probability of false alarm.