The main obstacle to accurate 3D tracking is the presence of the sensors' line-of-sight (LOS) biases, which are typically ~10×, or up to 100× depending on the application, larger than the measurement errors in the sensors' focal planes (FPs). Common LOS biases are caused by the sensor installation on the platform; by misalignments between the FP and the inertial measurement unit (IMU), including misalignments between the focal planes of multi-band sensors; by uncertainties in the positions of the sensors; by atmospheric refraction effects (including multi-band dependencies); and by time synchronization errors between multiple sensors. Unlike random measurement noise, these LOS bias errors are hard to characterize statistically, and the robustness of the estimation process is therefore decreased when the statistics of the bias errors are not known. To reduce the LOS biases, current solutions include star calibration, which uses the angular star measurements from each sensor to estimate its absolute LOS via a Stellar-Inertial LOS estimation algorithm. In general, however, the inclusion of LOS biases (even when reduced via star calibration) in the 3D tracking process degrades the performance of the tracker, for example the Multiple Hypothesis Tracking (MHT) algorithm, in terms of error covariances for the state vector and probabilities for the multiple hypotheses.
The problem of multi-sensor, multi-target 3D fusion has been of practical interest for some time. Typical approaches are based on two major stages: target/feature association, and target/feature tracking in 3D. A “target” is defined herein as an object already confirmed to be a target, which can subsequently be tracked. A “feature” is defined as any object (e.g., clutter) considered a candidate for being declared a target, and which therefore must be associated and tracked across the sensors to confirm whether or not it is in fact a real target. Hereinafter, “target” refers to a target, a feature, or both. At the target association stage, the closest-approach (triangulation) method is used to intersect the lines of sight (LOS) from each of two (or more) sensors to the potential target and to generate the best associations of 2D tracks, in order to estimate the initial 3D positions of the targets as viewed by the multiple sensors. There are various data association algorithms, such as the Munkres algorithm, which is based on the efficient handling of target-pair combinatorics. This approach mitigates the LOS biases via global optimization in azimuth/elevation space or over the miss distances between the LOSs.
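The closest-approach triangulation described above can be illustrated with a minimal sketch: given two sensor positions and their LOS unit vectors toward a candidate target, the midpoint of the segment of closest approach gives an initial 3D position estimate, and its length gives the miss distance used as an association cost. This is a generic geometric illustration (the function name, NumPy usage, and ray parameterization are assumptions, not the claimed method); a full association stage would feed such miss distances into an assignment solver such as the Munkres algorithm.

```python
import numpy as np

def triangulate_closest_approach(p1, u1, p2, u2):
    """Midpoint of closest approach between two LOS rays.

    p1, p2: sensor positions (3-vectors).
    u1, u2: LOS direction vectors (need not be unit length).
    Returns (estimated 3D point, miss distance between the rays).
    """
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 / np.linalg.norm(u2)
    w0 = p1 - p2
    a, b, c = u1 @ u1, u1 @ u2, u2 @ u2
    d, e = u1 @ w0, u2 @ w0
    denom = a * c - b * b            # approaches 0 for near-parallel rays
    t1 = (b * e - c * d) / denom     # parameter along ray 1
    t2 = (a * e - b * d) / denom     # parameter along ray 2
    q1 = p1 + t1 * u1                # closest point on ray 1
    q2 = p2 + t2 * u2                # closest point on ray 2
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)
```

For two biased sensors the rays generally do not intersect; the returned miss distance is nonzero and, as described in the text, small miss distances (or small azimuth/elevation residuals) indicate likely track-to-track associations.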
At the tracking stage, a powerful MHT framework is often used for tracking multiple targets over time and for continuing the association of tracks from multiple sensors. At this stage, the measurement models are usually linearized, and an Extended Kalman-type Filter (EKF) within the MHT framework is used for each target to estimate an extended state vector that includes the target's position and velocity as well as the LOS biases (modelled by first- or higher-order Markov shaping filters). However, Kalman-type filtering becomes computationally expensive and complex when the number of targets is large, because a large covariance matrix must be maintained for the state vector of each target together with the common bias vector.
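The state augmentation described above can be sketched in a reduced one-dimensional form: a linear Kalman filter whose state is [position, velocity, LOS bias], with the bias modelled as a first-order Gauss-Markov process (shaping filter) and the measurement seeing position plus bias. All numerical parameters (time step, correlation time, noise levels) are illustrative assumptions, not values from the text, and the sketch omits the MHT hypothesis management entirely.

```python
import numpy as np

dt, tau = 1.0, 50.0                        # time step and bias correlation time (assumed)
F = np.array([[1.0, dt,  0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, np.exp(-dt / tau)]])  # first-order Markov shaping filter for bias
H = np.array([[1.0, 0.0, 1.0]])            # measurement observes position + bias
Q = np.diag([1e-4, 1e-4, 1e-3])            # process noise covariance (assumed)
R = np.array([[0.01]])                     # measurement noise variance (assumed)

def kf_step(x, P, z):
    """One predict/update cycle of the bias-augmented filter."""
    # Predict: propagate state and covariance, bias decays toward zero mean.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: standard Kalman gain on the scalar measurement.
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain (3x1)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

The sketch makes the cost argument of the text concrete: the covariance P grows as the square of the augmented state dimension, so appending a shared bias vector to every target's state inflates every per-target covariance matrix.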
Typically, the effect of LOS biases is more severe for narrow/medium field-of-view (FOV) sensors (e.g., FOV < 10°), where the goal is to fully utilize the high resolution of the pixels. This is the typical case for modern electro-optical/infrared (EO/IR) remote sensors. In any case, it is highly desirable to isolate the LOS biases because, unlike measurement noise, they are difficult to characterize (being correlated in time) and are unpredictable. Any mismatch in their statistical modeling can result in divergent 3D estimates.
Accordingly, there is a need for a more efficient, more flexible, less computationally complex, and higher-quality approach to multi-sensor, multi-target 3D fusion for an EO/IR sensor system.