Using multiple cameras to identify and track objects is an active area of computer vision research. One common challenge to existing approaches is their reliance on states that are assumed to be accurate and that are updated from current observations. While many techniques have been proposed to avoid errors in state estimation, inadvertent state errors (e.g., arising from inaccurate observations) may persist for a long time period after the observation errors are introduced, resulting in poor tracking performance.
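The persistence of such state errors can be illustrated with a minimal sketch (not the particular method described here): a recursive estimator whose state is assumed accurate and is blended with each new observation using a fixed gain. The gain value, the constant true position, and the injected mis-detection are all illustrative assumptions.

```python
# Minimal sketch of a state-based tracker: the state is assumed
# accurate and is updated from each current observation. A single
# corrupted observation biases the state, and the bias decays only
# slowly over subsequent frames.

def update(state, observation, gain=0.1):
    # Recursive update: blend the prior state with the current
    # observation using a fixed gain (illustrative value).
    return state + gain * (observation - state)

# True object position is constant at 0.0; frame t=5 yields a bad
# observation (e.g., a mis-detection at 10.0).
state = 0.0
history = []
for t in range(30):
    z = 10.0 if t == 5 else 0.0
    state = update(state, z)
    history.append(state)

print(round(history[5], 3))   # -> 1.0, jump caused by the bad observation
print(round(history[20], 3))  # -> 0.206, bias still present 15 frames later
```

The error injected at a single frame thus contaminates the state estimate for many frames afterward, which is the failure mode the passage above describes.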
Given a collection of camera views, there may be redundancies among detected objects, which can be exploited to perform object tracking, counting, characterization, and the like. However, an object detected in more than one camera view may have different appearances, which introduces a challenge for establishing correspondence across camera views.
For example, a group of objects {A, B, C} appears in a collection of frames captured by cameras with overlapping fields of view. For the purpose of object localization and tracking, proper correspondence for these objects is established across camera views, for example, by associating object A with candidate object 1 in camera view 1, candidate object 3 in camera view 2, candidate object 2 in camera view 3, and so on. While it is relatively straightforward to identify an object (e.g., a book, a coffee mug, a pen, etc.) by matching its visual features to the features of a previously identified or pre-characterized object in a library, it is a challenge to establish such correspondence when no prior information exists for an unknown object in a scene.
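The library-based identification contrasted above can be sketched as nearest-neighbor matching of a feature vector against pre-characterized entries. The feature vectors, library entries, and similarity threshold here are all illustrative placeholders, not the actual features or matching criterion of any particular system.

```python
# Hedged sketch of library-based identification: match a detected
# object's feature vector to the closest pre-characterized library
# entry by cosine similarity.
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Illustrative library of pre-characterized objects.
library = {
    "book": [0.9, 0.1, 0.0],
    "coffee_mug": [0.1, 0.8, 0.3],
    "pen": [0.0, 0.2, 0.9],
}

def identify(features, min_similarity=0.9):
    # Return the best-matching library entry, or None when nothing is
    # similar enough -- the "unknown object" case, in which cross-view
    # correspondence cannot rely on a prior model.
    best = max(library, key=lambda k: cosine(features, library[k]))
    if cosine(features, library[best]) < min_similarity:
        return None
    return best

print(identify([0.85, 0.15, 0.05]))  # -> book
print(identify([0.5, 0.5, 0.5]))     # -> None (no close library match)
```

When `identify` returns None, no prior information exists for the detection, which is precisely the situation in which establishing correspondence across camera views becomes difficult.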
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.