Today, different types of sensor networks are used to create warning systems for natural disasters, such as floods, landslides, and mudslides. Frequently, these systems include one or more video devices (i.e., cameras) for monitoring and capturing images of a scene which includes one or more objects or events of interest.
According to these conventional monitoring systems, images captured by a video device are analyzed using standard object detection and modeling techniques, which focus on objects such as humans, animals, vehicles, roads, etc. Generally, these techniques are limited to the detection of static and rigid target objects with low variance in color and shape. For example, U.S. Patent Application Publication No. 2001/0008561 (of Paul et al.) describes a tracking method based on color, shape, and motion. In another example, U.S. Patent Application Publication No. 2003/0128298 (of Moon et al.) describes a color-based object tracking system. However, according to both of these exemplary approaches, a static target object must first be identified by an operator and tracked starting from a marked sequence. Further, these monitoring systems are not automated, requiring constant supervision by one or more operators. As such, these systems cannot be used to recognize and analyze scenes which include a flow.
Commonly, flows exist in a variety of forms, including but not limited to water, mud, animals, people, and vehicles. Further, certain flows can be dangerous or even deadly, thus requiring constant monitoring and supervision. In these cases, a monitoring system must not only detect and identify the flow regions, but also determine the behavior of the flow motion over time.
Typically, this motion, defined as the displacement of the flow between two images captured in time, is measured using motion vectors. U.S. Pat. No. 5,682,438 (issued to Kojima et al.) describes an exemplary method for calculating motion vectors. This patent is incorporated herein by reference in its entirety.
Motion vectors are used as the basis of many conventional systems to estimate the motion of an object (see for example, U.S. Pat. Nos. 6,697,427 (issued to Kurak et al.), 6,687,301 (issued to Moschetti), 6,687,295 (issued to Webb et al.), 6,668,020 (issued to Ma et al.), 6,567,469 (issued to Rackett), and 6,563,874 (issued to Lu), etc.). These conventional systems typically approximate the best match of blocks between video frames. The quality of a match between two corresponding blocks is measured by the sum of the absolute difference of corresponding pixels in the blocks, as is known in the art. However, such a matching technique may not result in an accurate estimation of the physical location change of an object over a number of video frames, known as ‘true’ motion.
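The block-matching approach described above can be illustrated with a short sketch. The following is a minimal, illustrative example (not the method of any patent cited herein): for a block in a previous frame, it exhaustively searches a small window in the current frame for the candidate block minimizing the sum of absolute differences (SAD), and returns the corresponding displacement as the motion vector. The function names and parameters are hypothetical.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def best_match_vector(prev, curr, top, left, size=8, search=4):
    """Exhaustive block-matching motion estimation (illustrative sketch).

    Returns the displacement (dy, dx), within +/- `search` pixels, that
    minimizes the SAD between the block at (top, left) in `prev` and the
    corresponding candidate block in `curr`.
    """
    ref = prev[top:top + size, left:left + size]
    h, w = curr.shape
    best_cost, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue  # candidate block falls outside the frame
            cost = sad(ref, curr[y:y + size, x:x + size])
            if best_cost is None or cost < best_cost:
                best_cost, best_vec = cost, (dy, dx)
    return best_vec
```

As the passage notes, the minimum-SAD match is only an approximation: a low SAD indicates similar pixel values, not necessarily the true physical displacement of the underlying object.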
Exemplary true motion estimation methods are described in an article entitled “An Iterative Image Registration Technique with an Application to Stereo Vision”, DARPA Image Understanding Workshop, pp. 121-130 (1981) by B. D. Lucas et al., and U.S. Pat. No. 5,072,293 (issued to Dehaan et al.), both of which are incorporated herein by reference in their entirety.
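The Lucas-Kanade approach referenced above can be sketched in a few lines. This is a simplified, global (single-vector) variant for illustration only, not the cited patent's method: it solves the brightness-constancy equation Ix·u + Iy·v + It = 0 in the least-squares sense over all pixels of two frames. The function name is hypothetical, and a practical implementation would solve per-window or per-pixel systems with iterative refinement.

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one global optical-flow vector (u, v) between two frames.

    Builds the least-squares system of the brightness-constancy
    equation, Ix*u + Iy*v + It = 0, over every pixel and solves it.
    """
    prev = prev.astype(float)
    curr = curr.astype(float)
    # Spatial gradients (np.gradient returns axis-0 / rows first).
    Iy, Ix = np.gradient(prev)
    # Temporal gradient between the two frames.
    It = curr - prev
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    # Least-squares solution of A @ [u, v] = b.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Because the estimate is driven by image gradients rather than raw block similarity, it approximates the physical displacement of the scene content, which is the sense in which such methods target ‘true’ motion.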
While true motion estimation is critical to flow motion detection, systems employing these methods fail to distinguish between the flow motion of a targeted object and other ambient, non-flow motion, such as the movement of trees, flying birds, rain, etc. These and other foreign objects obstruct the targeted object, resulting in significant occlusion problems.
Another drawback of existing video monitoring systems is the lack of efficient alarming capabilities. Specifically, these systems do not provide alarms and warning messages, store and transmit detected signals, or control security devices.
In addition, due to bandwidth limitations, conventional systems cannot process and transmit multi-channel video in real-time to multiple locations. Thus, existing systems are limited in that they do not provide an efficient cross-layer optimization wherein the detection of an event and an alarm type are synchronized to transmit alarm data to an operator. Cross-layer optimization techniques, known in the art, apply joint optimization to different layers of a system. For example, in an object extraction system, cross-layer optimization may include joint optimization of a low-level feature extraction layer, a high-level feature labeling layer, and an operator-interface layer for querying regarding the object of interest.
In sum, conventional video surveillance systems employ simple motion detection techniques that do not identify and analyze flow motion regions of an image, detect the occurrence of flow events, track the motion of a flow motion region, distinguish between target flow motion and other ambient motion, or provide efficient alarm management.
Accordingly, there is a need in the art for an automated monitoring system that provides accurate real-time flow detection and flow motion analysis with an efficient alarm capability.