1. Field of the Invention
Embodiments of the invention provide techniques for analyzing a sequence of video frames. More particularly, embodiments of the invention provide techniques for analyzing and learning behavior based on streaming video data while detecting and responding to out-of-focus video data.
2. Description of the Related Art
Some currently available video surveillance systems provide simple object recognition capabilities. For example, a video surveillance system may be configured to classify a group of pixels (referred to as a “blob”) in a given frame as being a particular object (e.g., a person or vehicle). Once identified, a “blob” may be tracked from frame to frame in order to follow the “blob” moving through the scene over time, e.g., a person walking across the field of view of a video surveillance camera. Further, such systems may be configured to determine when an object has engaged in certain predefined behaviors. For example, the system may include definitions used to recognize the occurrence of a number of pre-defined events, e.g., the system may evaluate the appearance of an object classified as depicting a car (a vehicle-appear event) coming to a stop over a number of frames (a vehicle-stop event).
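The frame-to-frame tracking and predefined-event detection described above can be illustrated with a minimal sketch. This is not any particular system's implementation; the greedy nearest-centroid matching, the `max_dist` gate, and the displacement-based stop test are all illustrative assumptions, and blob detection/classification is assumed to happen upstream (a blob is reduced here to its centroid).

```python
from math import hypot

def match_blobs(prev, curr, max_dist=50.0):
    """Greedily match each current blob centroid to the nearest
    unmatched centroid from the previous frame (within max_dist)."""
    matches, used = {}, set()
    for cid, (cx, cy) in curr.items():
        best, best_d = None, max_dist
        for pid, (px, py) in prev.items():
            if pid in used:
                continue
            d = hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = pid, d
        if best is not None:
            matches[cid] = best
            used.add(best)
    return matches

def is_stopped(track, n_frames=5, tol=2.0):
    """A 'vehicle-stop'-style event test: the per-frame displacement
    stays under tol for the last n_frames positions of a track."""
    if len(track) < n_frames:
        return False
    recent = track[-n_frames:]
    return all(hypot(x2 - x1, y2 - y1) <= tol
               for (x1, y1), (x2, y2) in zip(recent, recent[1:]))
```

For example, a track whose recent positions barely move, such as `[(10, 0), (10.5, 0), (11, 0), (11.2, 0), (11.3, 0)]`, would satisfy `is_stopped`, while a track advancing ten pixels per frame would not.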
Cameras used in video surveillance systems may provide out-of-focus video frames under various conditions (e.g., when an object appears too close to the camera). Such out-of-focus video frames may negatively affect the video surveillance system's operation, including its ability to distinguish foreground objects in the scene. For example, if a background model that is updated over time from the incoming video frame stream is used to distinguish foreground objects, out-of-focus frames may be absorbed into that model, causing the video surveillance system to use an incorrect background to distinguish foreground objects.
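The failure mode above can be sketched as follows: a running-average background model that blindly absorbs every frame will drift when defocused frames arrive, whereas gating updates on a simple sharpness measure leaves the background intact. The mean-absolute-Laplacian focus measure, the `alpha` blending rate, and the `min_sharpness` threshold are all illustrative assumptions, not parameters from the source.

```python
def sharpness(frame):
    """Mean absolute 4-neighbour Laplacian over a 2-D list of
    pixel intensities; defocused frames score near zero."""
    h, w = len(frame), len(frame[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x] +
                   frame[y][x - 1] + frame[y][x + 1] - 4 * frame[y][x])
            total += abs(lap)
            count += 1
    return total / count if count else 0.0

class BackgroundModel:
    def __init__(self, alpha=0.05, min_sharpness=0.5):
        self.alpha = alpha                  # blending rate (assumed)
        self.min_sharpness = min_sharpness  # focus gate (assumed)
        self.bg = None

    def update(self, frame):
        """Fold the frame into the running average, unless it
        appears out of focus; returns whether it was used."""
        if sharpness(frame) < self.min_sharpness:
            return False  # defocused: leave the background untouched
        if self.bg is None:
            self.bg = [row[:] for row in frame]
        else:
            a = self.alpha
            self.bg = [[(1 - a) * b + a * f for b, f in zip(brow, frow)]
                       for brow, frow in zip(self.bg, frame)]
        return True
```

Without the sharpness gate, a burst of uniform, defocused frames would pull every background pixel toward the blur, and sharp foreground objects in subsequent frames could be mis-segmented against that corrupted background.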