Systems are available to autonomously gather high-resolution visible-band imagery over wide areas of terrain under surveillance, and to process these images to detect objects of potential interest.
However, if the scene under surveillance is complex, a large number of potential targets may be identified, not all of which are actually of interest. Those identified objects that are not of interest are typically referred to as “clutter”.
A system may be made more complex to improve its ability to reject clutter. However, there is an associated computational cost that may not be acceptable in a resource-limited system, for instance, in an airborne application.
A conventional alternative approach is to have a post-detection phase of processing for clutter rejection, so that relatively crude discrimination is performed by an initial detection stage, followed by more sophisticated discrimination at a later clutter rejection stage. The clutter rejection process is applied only to image regions of interest identified by the detection phase, thus limiting the computational cost of this phase.
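The two-stage structure described above can be sketched as follows. This is a minimal illustration, not any particular deployed system: the crude detection stage is assumed here to be a simple intensity threshold, and the clutter rejection stage a local-contrast test applied only to the candidate regions the detector flagged. The function names, the 3x3 neighbourhood, and the contrast criterion are all hypothetical stand-ins for the more sophisticated discrimination a real system would use.

```python
import numpy as np

def detect_candidates(image, threshold):
    """Crude detection stage: flag every pixel above an intensity
    threshold as a candidate region of interest (ROI). A real system
    would group pixels into regions; this sketch treats each bright
    pixel as one candidate."""
    ys, xs = np.nonzero(image > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

def reject_clutter(image, candidates, window=1, min_contrast=0.3):
    """More expensive discrimination, run only on the candidate ROIs:
    keep a candidate only if it exceeds its local background by at
    least min_contrast. The contrast test is a placeholder for a more
    sophisticated classifier."""
    kept = []
    for (y, x) in candidates:
        y0, y1 = max(0, y - window), min(image.shape[0], y + window + 1)
        x0, x1 = max(0, x - window), min(image.shape[1], x + window + 1)
        patch = image[y0:y1, x0:x1]
        # Local background: mean of the neighbourhood excluding the pixel itself.
        background = (patch.sum() - image[y, x]) / (patch.size - 1)
        if image[y, x] - background >= min_contrast:
            kept.append((y, x))
    return kept

# Toy scene: one genuine point target and one low-contrast clutter patch.
scene = np.full((8, 8), 0.1)
scene[2, 2] = 1.0          # target: bright against a dark background
scene[5:7, 5:7] = 0.45     # clutter: above threshold, but low local contrast

candidates = detect_candidates(scene, threshold=0.4)   # 5 candidate pixels
targets = reject_clutter(scene, candidates)            # only (2, 2) survives
```

The point of the structure is visible in the cost profile: the threshold pass touches every pixel cheaply, while the per-candidate neighbourhood analysis, which would be expensive over the whole frame, runs only on the handful of ROIs the detector produced.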
Clutter rejection algorithms have been developed by a number of investigators for different optical sensor applications. However, these systems make use of manual cues for a target, or of differences in size and dynamics between target and background, to reject clutter (and so tend not to be suitable for an autonomous sensor system capturing images at a low frame rate). Other existing clutter rejection systems use large amounts of data to determine the characteristics of clutter and target objects that allow reliable discrimination to occur (and so tend to need large amounts of training data, and/or time-consuming algorithm training or re-training to accommodate different operating environments).