1. Field of the Invention
The present invention relates generally to an apparatus for detecting mobile objects from a moving image inputted from a camera, and more particularly to an apparatus and method which combine information detected from a plurality of moving images or information detected from a plurality of locations in a single moving image for detection of invaders, measurement of speed or the like.
2. Description of the Related Art
At present, a variety of places such as roads, railroad crossings, service floors in banks, and the like are monitored through video images produced by cameras. Such monitoring is provided for purposes of eliminating traffic jams and preventing accidents and crimes by observing objects (mobile objects) in those particular places. There is an extremely high need for monitoring such mobile objects through video images. However, current video monitoring still cannot dispense with human intervention due to technical problems. Thus, automated monitoring processing through a computer or the like is needed in view of the situation mentioned above.
As a previously proposed method of detecting a mobile object, U.S. Pat. No. 5,721,692 describes a "MOVING OBJECT DETECTION APPARATUS." This patent realizes detection and extraction of a mobile object, and a reduction in video processing time, even with a complicated background. The method employed in this patent will be explained below with reference to FIG. 2.
In FIG. 2, frame images F1 (241) to F5 (245) represent frame images of a video inputted from time T1 (221) to time T5 (225). A line segment S (231) drawn in each frame image of FIG. 2 specifies, as a line segment, a target area to be monitored within the input video. Hereinafter, this linear target area is referred to as the slit. Image pairs 201 to 205 in FIG. 2 each represent an image on the slit S (hereinafter referred to as the slit image) and a background image from time T1 (221) to time T5 (225). In this example, the background image at the beginning of the processing is set to an image of the target area to be monitored, captured when no mobile object is imaged by the camera.
This method performs, on each frame image, the following processing steps: (1) extracting a slit image and a background image in a particular frame; (2) calculating the amount of image difference between the slit image and the background image by an appropriate method, such as calculating the sum of squares of differences between pixel values in the two images; (3) tracing the amount of image difference in a time sequential manner, and determining the existence of a mobile object if the amount of image difference transitions along a V-shaped pattern; and (4) determining that the background image should be updated when the amount of image difference has remained flat, i.e., has not varied, for a predetermined time period or more.
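Step (2) above can be sketched as follows. This is an illustrative implementation only, assuming the slit image and background image are given as NumPy arrays of pixel values; the function name and array representation are assumptions, not part of the cited patent.

```python
import numpy as np

def image_difference(slit_image: np.ndarray, background: np.ndarray) -> float:
    """Amount of image difference (step (2)): the sum of squares of the
    differences between corresponding pixel values in the slit image and
    the background image."""
    diff = slit_image.astype(np.float64) - background.astype(np.float64)
    return float(np.sum(diff * diff))
```

An identical slit image and background yield a difference of zero; the value grows as a mobile object occludes more of the slit.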
The foregoing step (3) will be explained in detail with reference to a sequence of frame images in FIG. 2. As shown in this example, when an object crosses the slit, the amount of image difference transitions along a V-shaped curve as illustrated in an image changing amount graph (211) of FIG. 2. First, before the object passes the slit (time T1 (221)), the image on the slit S and the background image are substantially the same (201), thus producing a small amount of image difference. Next, as the object begins crossing the slit (time T2 (222)), the slit image becomes different from the background image (202), causing an increase in the amount of image difference. Finally, after the object has passed by the slit (time T3 (223)), the amount of image difference again returns to a smaller value. In this way, when an object crosses the slit S, the amount of image difference exhibits a V-shaped curve. It follows that a mobile object can be found by tracing the amount of image difference in a time sequential manner and locating a V-shaped portion. In this example, the V-shaped portion is recognized to extend from the point at which the amount of image difference exceeds a threshold value a (213) to the point at which it subsequently decreases below the threshold value a (213).
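The threshold-based recognition of a V-shaped portion described above can be sketched as follows. The function name and return format are illustrative assumptions; the rule itself follows the text: a portion runs from the point where the difference exceeds the threshold a to the point where it next falls below it.

```python
def detect_v_shapes(differences, threshold):
    """Trace a time sequence of image-difference values (step (3)).

    Each V-shaped portion is reported as a (start, end) pair of time
    indices: `start` is where the difference first exceeds `threshold`,
    and `end` is where it subsequently drops back below `threshold`.
    Each such portion corresponds to one mobile object crossing the slit.
    """
    detections = []
    start = None
    for t, d in enumerate(differences):
        if start is None and d > threshold:
            start = t                      # difference rose above threshold a
        elif start is not None and d < threshold:
            detections.append((start, t))  # difference fell back below a
            start = None
    return detections
```

For example, a difference sequence that rises above the threshold at one time index and falls back below it at a later index yields a single detection spanning those indices.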
Next, the foregoing step (4) will be explained with reference again to the sequence of frame images in FIG. 2. As shown in this example, when a piece of baggage (252) or the like is left on the slit (time T4 (224)), the amount of image difference increases. However, since the baggage (252) remains stationary, the amount of image difference stays at a high value and does not vary (from time T4 (224) to time T5 (225)). In this method, when the amount of image difference fluctuates only slightly over a predetermined time period, the slit image at that time is adopted as an updated background.
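The background update rule of step (4) can be sketched as follows. The fluctuation measure (range of values over a fixed-length window) and the parameter names are assumptions made for illustration; the patent text specifies only that the difference must remain nearly constant for a predetermined period.

```python
def should_update_background(recent_differences, tolerance, period):
    """Decide whether to adopt the current slit image as the new background.

    Per step (4): if the amount of image difference has fluctuated by no
    more than `tolerance` over the most recent `period` samples, the scene
    on the slit is considered stationary (e.g. baggage left behind), so
    the current slit image should replace the background image.
    """
    if len(recent_differences) < period:
        return False  # not enough history to judge flatness
    window = recent_differences[-period:]
    return max(window) - min(window) <= tolerance
```

In the FIG. 2 scenario, the difference jumps when the baggage appears at T4 and then stays flat through T5, at which point this rule would trigger the update.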
As explained above, since U.S. Pat. No. 5,721,692 can use a line segment as the target area for monitoring, the time required to calculate the amount of image difference can be greatly reduced as compared with an earlier method which monitors the entire screen as a target area. Also, since this method can find the timing for updating the background by checking time sequential variations of the amount of image difference, the monitoring processing can be applied even to places at which the background frequently changes, such as outdoor scenes.
However, when the above-mentioned prior art method is simply utilized, the following problems may arise.
A first problem is that only one target area for monitoring can be set on a screen.
A second problem is the inability to perform highly sophisticated detection and determination based on the contents of a monitored mobile object, such as determination of the temporal relationship between the detection times of a mobile object, determination of the similarity of images resulting from the detection, and so on.
3. Summary of the Invention
A mobile object combination detection apparatus according to the present invention comprises: a plurality of sets, each including a unit for inputting a video and a unit for detecting a mobile object from the input video; a mobile object combination determination unit for combining the mobile object detection results outputted from the respective sets and making a determination on the combined results; and a unit for outputting the detection results.
When any of the mobile object detection units detects an event, such as an invasion of a mobile object, a background update, and so on, that detection unit outputs mobile object detection information including an identifier of the mobile object detection unit, a detection time, the type of the detected event, and the image at the slit used in making the detection. The mobile object combination determination unit makes the final determination on detection of a mobile object by evaluating, in combination, the conditions indicated by the information outputted from the respective mobile object detection units.
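The detection information record and one possible combination rule can be sketched as follows. The record fields mirror those listed above (unit identifier, detection time, event type, slit image), but the field names, the specific rule, and all parameters are hypothetical illustrations, not the apparatus's actual determination logic.

```python
from dataclasses import dataclass

@dataclass
class DetectionRecord:
    """Mobile object detection information output by one detection unit."""
    unit_id: str        # identifier of the mobile object detection unit
    time: float         # detection time
    event: str          # type of detected event, e.g. "invasion" or "background_update"
    slit_image: object  # image at the slit used for the detection

def detected_in_order(records, first_unit, second_unit, max_interval):
    """An assumed example of combination determination: report a mobile
    object when `first_unit` detects an invasion and `second_unit` then
    detects one within `max_interval`, e.g. to measure speed between two
    slits placed a known distance apart."""
    firsts = [r.time for r in records
              if r.unit_id == first_unit and r.event == "invasion"]
    seconds = [r.time for r in records
               if r.unit_id == second_unit and r.event == "invasion"]
    pairs = []
    for t1 in firsts:
        for t2 in seconds:
            if 0 < t2 - t1 <= max_interval:
                pairs.append((t1, t2))  # ordered pair of detection times
    return pairs
```

Given the interval between the paired detection times and the distance between the two slits, a speed estimate follows directly; other combination rules (e.g. comparing the similarity of the two slit images) could be evaluated over the same records.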