A wide field-of-view and rapid response to threats are critical components of any surveillance system. A wide field-of-view is normally implemented by articulating a camera: allowing it to swivel to pan and tilt, and actively zooming in on "interesting" locations. Since a single camera suffers from the "soda straw" problem, in which only a small portion of the scene can be examined at any given time, leaving the rest of the scene unwatched, surveillance systems often employ a radar unit to direct the operator to likely targets. This provides direction to the search but still poses a security risk, since potentially hazardous activities might be occurring in an unwatched portion of the field-of-view while the operator is investigating another incident (either coincidental or intentionally distracting).
There are a number of security systems that combine camera and radar that are designed for ground-level surveillance. Among them are (1) the Night Vision Labs Cerberus Scout manufactured by MTEQ located at 140 Technology Park Drive, Kilmarnock, Va.; (2) the Blighter Explorer manufactured by Blighter Surveillance Systems located at The Plextek Building, London Road, Great Chesterford, Essex, CB10 1NY, United Kingdom; (3) the Honeywell Radar Video Surveillance (RVS) system manufactured by Honeywell, which is located at 2700 Blankenbaker Pkwy, Suite 150, Louisville, Ky. 40299; and (4) the U.S. Army's COSFPS (a.k.a. the "Kraken"). While these systems are manufactured by different corporations and employ different combinations of sensors and "size, weight, and power" (SWAP) constraints for deployment, they all share a common theme; each system contains a radar that scans for targets, a camera or cameras (e.g., electro-optical (EO) and infrared (IR)) that can mechanically slew and zoom to regions of interest (most likely as the result of a radar message), and an operator console that allows a human operator to either automatically slew to radar hits or examine other locations by manually controlling the camera. Existing systems suffer from the same limitations: they are manual, and the combined radar-camera system only allows an operator to visually examine a subset of the radar hits. Adding more cameras may improve the ability to visually detect more targets, but system complexity and cost increase accordingly.
There is published prior art on automating radar-camera systems for object detection and recognition. For example, van den Broek et al. (see the List of Incorporated Cited Literature References, Literature Reference No. 1) proposes a system that employs simultaneous use of a visual sensor (camera) and radar to perform surveillance operations and automatic target recognition (ATR). The van den Broek system focuses on behavior recognition using a combination of radar and wide field-of-view video cameras. Additionally, their system uses a radar unit to determine the azimuth and distance of a target, and then employs a video camera to capture target video for analysis by the ATR system. However, the van den Broek system does not attempt to optimize the camera slew/zoom to maximize the number of radar hits that can be visually examined by an operator (or even by ATR, for that matter).
Additionally, Schwering et al. (see Literature Reference No. 2) employs a panoramic array of heterogeneous cameras (as opposed to a single camera), radar, and an "automated identification system" (AIS) consisting of tracking and classification mechanisms. The panoramic camera array is reactive to radar messages, meaning that the cameras remain idle until a radar message is received and only then slew to the target location. Such a reactive process can lead to missing targets that move very quickly, or missing targets when multiple radar hits are received simultaneously, making it impossible for a human operator to slew the camera and identify all of the targets before they move away from their original locations.
Other bodies of work (see Literature Reference Nos. 3 and 4) deal with scheduling multiple pan-tilt-zoom (PTZ) cameras for detection and tracking, which is a fundamentally different problem because multiple cameras are available. The emphasis in these works is on tracking (as opposed to detection), and each PTZ camera can track only one object at a time.
Thus, a continuing need exists for a system that can detect multiple objects at once with a single camera and that optimizes camera slew and zoom to maximize the number of radar hits that can be visually examined by an operator.
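To illustrate the kind of optimization the above need statement refers to, the following is a minimal sketch, not drawn from any of the cited systems: given the azimuths of recent radar hits and the angular width of the camera's field of view at a given zoom level, it selects the single pan angle that brings the most hits into view at once. The function name, the use of azimuth-only (1-D) geometry, and the sliding-window method are all assumptions made for illustration.

```python
# Illustrative sketch (assumed, not from the cited prior art): pick one
# pan angle that covers the maximum number of radar-hit azimuths with a
# camera field of view of width fov_deg.

def best_pan_angle(hit_azimuths, fov_deg):
    """Return (pan_center_deg, hits_covered), where pan_center_deg is a
    camera pan angle whose field of view of width fov_deg contains the
    largest number of hit azimuths (degrees on a 0-360 circle)."""
    az = sorted(a % 360.0 for a in hit_azimuths)
    n = len(az)
    if n == 0:
        return 0.0, 0
    # Unwrap the circle: append each azimuth shifted by 360 degrees so a
    # candidate window may straddle the 0/360 boundary.
    ext = az + [a + 360.0 for a in az]
    best_count, best_center = 0, az[0]
    j = 0
    for i in range(n):  # try a window whose left edge sits on each hit
        if j < i:
            j = i
        while j < i + n and ext[j] - ext[i] <= fov_deg:
            j += 1  # grow the window while the next hit still fits
        count = j - i
        if count > best_count:
            best_count = count
            best_center = (ext[i] + fov_deg / 2.0) % 360.0
    return best_center, best_count
```

For example, with hits at azimuths 10, 15, 20, and 200 degrees and a 20-degree field of view, the sketch centers the camera near 20 degrees to cover three of the four hits in a single look; a reactive system that slews to each hit in arrival order would make no such aggregation.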