In high-end radar systems, the reflected signals from radar emissions are amplified and then filtered to extract a sequence of 2D input images in the form of cells in an image coordinate system. In radar terminology, the cells correspond to pixels in conventional images. Each cell corresponds to an intensity (power) of the received signal at a particular spatial location in a world coordinate system, defined by a range (bin) and an azimuth (beam). In other words, the coordinates of the cells in the image coordinate system correspond to locations in the world coordinate system.
In addition to a potential target reflection signal, the image also includes noise, electromagnetic interference, and clutter. It is extremely difficult to detect very small targets in noisy environments. The difficulty can be compared to the classical "needle in a haystack" problem.
Most simple methods apply a threshold to the input image and label the cells exceeding the threshold value as corresponding to candidate targets. If the threshold is too low, then more targets are detected, at the expense of an increased number of false alarms. Conversely, if the threshold is relatively large, then fewer targets are detected, but the number of false alarms is relatively small.
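The fixed-threshold approach above can be sketched as follows; this is a minimal illustration, not a production detector, and the function name `threshold_detect` and the toy image are assumptions for illustration:

```python
import numpy as np

def threshold_detect(image, threshold):
    """Label cells whose intensity exceeds a fixed threshold as candidate targets.

    Returns a boolean mask over the cells and the (row, col) indices of the
    candidate cells.
    """
    mask = image > threshold
    return mask, np.argwhere(mask)

# Toy 5x5 image: Gaussian background noise with one strong cell at (2, 2).
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(5, 5))
img[2, 2] += 10.0

mask, cells = threshold_detect(img, threshold=5.0)
```

Raising `threshold` suppresses false alarms but eventually suppresses the target cell as well, which is exactly the trade-off described above.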
Often the threshold is set to achieve a constant false alarm rate (CFAR) by adaptively estimating the level of the noise floor around the cell using background statistics. This is acceptable as long as the signal-to-noise ratio (SNR) and signal-to-clutter ratio (SCR) are sufficiently large. However, for a lower SNR, where targets cannot be easily distinguished from the cluttered noisy background, such cell thresholding approaches yield high rates of false detections.
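A common concrete instance of adaptive noise-floor estimation is cell-averaging CFAR. The sketch below is a simplified 1-D version under assumed parameters (training/guard window sizes and scale factor are illustrative, not values from this document):

```python
import numpy as np

def ca_cfar(x, num_train=8, num_guard=2, scale=5.0):
    """Cell-averaging CFAR over a 1-D range profile.

    For each cell under test, the noise floor is estimated as the mean of
    num_train training cells on each side, skipping num_guard guard cells
    that may contain target energy. A detection is declared when the cell
    exceeds scale times the local noise estimate.
    """
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = x[i - num_guard - num_train : i - num_guard]
        lag = x[i + num_guard + 1 : i + num_guard + num_train + 1]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = x[i] > scale * noise
    return detections

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, size=64)   # exponential noise (square-law detector)
profile[32] += 20.0                       # injected strong target
det = ca_cfar(profile)
```

Because the threshold scales with the locally estimated noise, the false alarm rate stays roughly constant across cells; as the text notes, this breaks down when the target energy is comparable to the estimated noise floor.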
FIG. 1 illustrates the problem solved by the invention. FIG. 1 has a sequence of four images of radar measurements when SNR=20 dB and SCR=7 dB. The most recent image is at time t1, and the earliest image is at time tT. There is a single target in the center of each image. Even for these small 21×21-cell images, the target signal cannot be easily identified. In real applications, the image size is 1000×100 cells, which means targets are even less distinguishable from the background noise and clutter.
Instead of making a decision solely based on the current image, detectors can be supplied with a temporal window of previous measurements to allow the detection of targets when the SNR is small. In the example shown, the temporal window includes the current and three previous images. Evidence of there being a target is accumulated by integrating likelihoods of individual cells over time in the temporal window. In other words, hypothetical targets are tracked before the targets are detected. This class of methods is often called track-before-detect (TBD).
Ideally, the evidence accumulation is performed by evaluating all possible states of a dynamic and intrinsic evolution of the target. Here the state of the target can correspond, for example, to the position and velocity of the target in the image and the intensity of the underlying cell. For simplicity, the state evolution is usually modeled by a linear process, especially when the temporal window duration is short. However, the input image is a stochastically sampled process and has only a nonlinear relation with the target state, although the target distribution characteristics are assumed to be available. In addition, cell responses with high intensities are only weakly correlated with the locations of the targets. As a result, an analytically intractable number of states can be generated even for basic specifications.
One way to make this problem feasible is to quantize the state space and use discrete-valued target models. Several grid methods have been developed to estimate the evidence in discrete space, including a Bayesian maximum a posteriori (MAP) estimator, a maximum likelihood (ML) estimator, and statistical graphical models, e.g., hidden Markov models (HMMs).
The Bayesian estimator is an approximation to the posterior distribution of the target state. On a uniformly spaced set of states, which is augmented with a null state to indicate the possibility of the no-target case, the estimator applies the Bayes rule by imposing certain heuristics on the state transition probabilities and marginal likelihoods; e.g., the parameters of the probability of target existence and the probability of target discontinuation control the detection performance. The parameters can be adjusted to optimize the performance of the detector.
The selection of the quantization steps is a trade-off between estimation accuracy, which improves with finer resolution, and computational requirements. The Bayesian estimator selects the state with the highest probability by recursively defining the probability of the target occupying a particular location by the superposition of all of the possible paths to that location. If the accumulated probability is higher than the null state probability, then a detected target is signaled.
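The recursive accumulation with a null state can be sketched as below. This is a toy model under assumed heuristics (a stationary target, uniform birth into any cell, and illustrative `p_birth`/`p_death` values standing in for the existence/discontinuation parameters mentioned above):

```python
import numpy as np

def bayes_grid_update(prior, likelihood, p_birth=0.05, p_death=0.05):
    """One recursive Bayesian update on a quantized state grid.

    prior: length N+1 vector; entries 0..N-1 are cell-occupancy probabilities
    and entry N is the null (no-target) state. likelihood: per-cell measurement
    likelihood ratios (the null state has likelihood 1).
    """
    n = len(prior) - 1
    predicted = np.empty_like(prior)
    occ = prior[:n].sum()
    # A target persists in its cell or discontinues; the null state persists
    # or spawns a target in a uniformly chosen cell.
    predicted[:n] = (1 - p_death) * prior[:n] + p_birth * prior[n] / n
    predicted[n] = (1 - p_birth) * prior[n] + p_death * occ
    posterior = predicted * np.append(likelihood, 1.0)
    return posterior / posterior.sum()

# Accumulate evidence over four frames in which cell 3 consistently
# returns a likelihood ratio above 1 (a hypothetical target response).
belief = np.append(np.zeros(9), 1.0)       # start fully in the null state
for _ in range(4):
    lr = np.ones(9)
    lr[3] = 4.0
    belief = bayes_grid_update(belief, lr)

detected = belief[3] > belief[-1]          # compare against the null state
```

A single frame with this likelihood ratio would not overcome the null state, but the evidence accumulated over the temporal window does, which mirrors the track-before-detect idea.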
Rather than accumulating the probability from alternate paths, the ML estimator selects the single best path. A quantized state space Viterbi process is designed to determine the most likely sequence of states by maximizing a joint posterior probability of the sequence of states. One advantage is that the Viterbi process always produces an estimate consistent with the dynamic model.
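The Viterbi recursion over a quantized state space can be sketched as follows; the three-state example and the transition matrix are illustrative assumptions, not values from this document:

```python
import numpy as np

def viterbi_tbd(log_lik, log_trans):
    """Most likely state sequence on a quantized grid (Viterbi recursion).

    log_lik: (T, N) per-frame log-likelihood of each quantized state.
    log_trans: (N, N) log transition probabilities encoding the dynamic
    model, so the returned track is always consistent with that model.
    """
    T, N = log_lik.shape
    score = log_lik[0].copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans            # (from_state, to_state)
        back[t] = np.argmax(cand, axis=0)            # best predecessor per state
        score = cand[back[t], np.arange(N)] + log_lik[t]
    path = [int(np.argmax(score))]                   # backtrack the single best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 3 states, nearest-neighbor moves only, target drifts 0 -> 1 -> 2.
log_trans = np.log(np.array([[0.5, 0.5, 1e-9],
                             [0.25, 0.5, 0.25],
                             [1e-9, 0.5, 0.5]]))
log_lik = np.log(np.array([[0.8, 0.1, 0.1],
                           [0.1, 0.8, 0.1],
                           [0.1, 0.1, 0.8]]))
track = viterbi_tbd(log_lik, log_trans)
```

Unlike the Bayesian accumulation over all paths, only the single best path survives, so the reported track always obeys the transition model.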
A discrete state space often leads to high computation and memory requirements. An alternative is to use a sequential analogue of a Markov chain Monte Carlo (MCMC) batch method, such as a particle filter, to accumulate the evidence within the Bayesian framework. MCMC is a numerical approximation technique that uses randomly placed samples instead of a fixed grid. The idea is to represent the required posterior density function by a set of random samples with associated weights, and to determine estimates based on these samples and weights.
As the number of samples becomes very large, this characterization becomes an equivalent representation to the usual functional description of the posterior probability density function (PDF), and the particle filter approaches the optimal Bayesian estimate. Although particle filtering can achieve similar estimation performance at lower cost by using fewer sampling points than are required for a discrete grid, it usually requires a considerable number of particles to effectively approximate continuous probability distributions. Thus, the computational burden for high-dimensional state spaces, e.g., where acceleration and non-linear motion are parameterized, becomes an issue.
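A minimal bootstrap particle filter sketch illustrates the sample-and-weight representation. The 1-D scalar state, the Gaussian motion and measurement models, and the function name `particle_filter_tbd` are all assumptions chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(7)

def particle_filter_tbd(measurements, num_particles=500):
    """Bootstrap particle filter approximating the posterior over a 1-D position.

    Each particle carries a scalar position; weights are updated with a
    Gaussian measurement likelihood and the particles are resampled every
    frame. The weighted sample set stands in for the posterior PDF that a
    grid method would represent on fixed cells.
    """
    particles = rng.uniform(0.0, 10.0, num_particles)        # diffuse prior
    for z in measurements:
        particles += rng.normal(0.0, 0.2, num_particles)     # motion model
        w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)      # likelihood weights
        w /= w.sum()
        idx = rng.choice(num_particles, num_particles, p=w)  # resampling step
        particles = particles[idx]
    return particles.mean()

# Noisy measurements of a target near position 4.0.
zs = 4.0 + rng.normal(0.0, 0.5, size=10)
estimate = particle_filter_tbd(zs)
```

With only a scalar state, 500 particles suffice; the text's point is that the particle count needed for a faithful approximation grows quickly once velocity, acceleration, and intensity are added to the state.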
Instead of using a numerical model for the target distribution, a multiple-hypothesis tracker (MHT) imposes a parametric representation to reduce the computational load. The MHT allows a hypothesis to be updated by more than one consecutive state at each update, generating multiple possible hypotheses. With each input image, all existing hypotheses are updated and unlikely hypotheses are deleted to upper bound the computational complexity.
A probabilistic MHT (PMHT) uses a recursive expectation maximization (EM), such as a Kalman filter, to determine, in an optimal way, associations between the measurements and targets, instead of measurement-to-hypothesis assignment. The probability that each measurement is associated with each hypothesis is estimated using the MAP method. In other words, the PMHT uses soft posterior probability associations between measurements and targets. These soft associations can be considered as mapping the problem from discrete, i.e., of combinatorial complexity, to continuous, i.e., amenable to iterative procedures.
In a histogram PMHT (H-PMHT), the received energy in each cell is quantized, and the resulting integer is treated as a count of the number of measurements that are within that cell. The sum over all of the cells is the total number of measurements taken. A probability mass function for these discrete measurements is modeled as a multinomial distribution, where the probability mass for each cell is the superposition of target and noise contributions.
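The quantization step of the H-PMHT can be sketched as below; the quantization step size and the toy energies are illustrative assumptions:

```python
import numpy as np

def quantize_to_counts(image, step=0.5):
    """Quantize per-cell received energy into integer measurement counts.

    Each cell's energy is divided by a quantization step and rounded down;
    the resulting integers are treated as counts of synthetic measurements
    within that cell, and their sum is the total number of measurements in
    the frame.
    """
    counts = np.floor(image / step).astype(int)
    return counts, int(counts.sum())

img = np.array([[0.2, 1.3],
                [2.6, 0.9]])
counts, total = quantize_to_counts(img)
pmf = counts / total   # empirical probability mass per cell
```

The resulting count vector is then modeled as a draw from a multinomial distribution whose per-cell probability mass is the superposition of target and noise contributions, as described above.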
Rather than using the entire input image, maximum likelihood joint probabilistic data association (JPDA) reduces the threshold to a low level, and then applies a grid-based state model for estimation to avoid track coalescence. Another approach to detect targets in the TBD manner is to apply a state parameter mapping, e.g., a Hough transform, after quantizing the parameters.
In addition to being computationally expensive, the above prior art methods assume the signal, clutter, and noise distribution functions to be known, due to their dependency on the likelihood ratio function. Furthermore, those methods impose single-stage Markovian updates, as in particle filters, for the determination of the cell likelihoods, even though a larger portion of the previous measurements is often available.