Motion-capture systems are used in a variety of contexts to obtain information about the conformation and motion of objects, including objects with articulating members, such as human hands or human bodies. Such systems generally include cameras to capture sequential images of an object in motion and computers to analyze the images to create a reconstruction of the object's volume, position and motion. For 3D motion capture, at least two cameras are typically used.
Image-based motion-capture systems rely on the ability to distinguish an object of interest from a background. This is often achieved using image-analysis algorithms that detect edges, typically by comparing adjacent pixels to find abrupt changes in color and/or brightness. Such conventional systems, however, suffer performance degradation under many common circumstances, e.g., low contrast between the object of interest and the background and/or patterns in the background that may falsely register as object edges.
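The edge-detection approach described above can be illustrated with a minimal sketch. This is not the method of any particular system; it simply flags pixels whose brightness differs sharply from a neighbor's, using an illustrative threshold value chosen here for demonstration:

```python
import numpy as np

def detect_edges(image, threshold=0.2):
    """Flag pixels whose brightness changes abruptly relative to a neighbor.

    `image` is a 2D array of brightness values in [0, 1]. The `threshold`
    value is an illustrative assumption, not taken from any real system.
    """
    # Absolute brightness differences between horizontally and vertically
    # adjacent pixels (prepending the first row/column keeps the shape).
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    # A pixel is an edge candidate where either difference exceeds the cutoff.
    return np.maximum(gx, gy) > threshold

# A synthetic frame: dark background with a bright square "object".
frame = np.zeros((8, 8))
frame[2:6, 2:6] = 1.0
edges = detect_edges(frame)
```

Note how the failure modes mentioned above follow directly from this scheme: if the object's brightness is close to the background's, no difference exceeds the threshold, and a textured background produces spurious edge candidates unrelated to the object.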
In some instances, distinguishing object and background can be facilitated by “instrumenting” the object of interest, e.g., by having a person wear a mesh of reflectors or active light sources or the like while performing the motion. Special lighting conditions (e.g., low light) can be used to make the reflectors or light sources stand out in the images. Instrumenting the subject, however, is not always a convenient or desirable option.
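When the subject is instrumented with reflectors or light sources and imaged under low ambient light, segmentation reduces to a simple brightness threshold. The sketch below assumes this idealized condition; the cutoff value and function name are illustrative, not drawn from any actual system:

```python
import numpy as np

def find_markers(frame, brightness_cutoff=0.8):
    """Locate bright reflector/LED markers in an otherwise dark frame.

    Under deliberately low ambient light, marker pixels are assumed to be
    far brighter than everything else; the cutoff value is illustrative.
    """
    ys, xs = np.nonzero(frame > brightness_cutoff)
    return list(zip(ys.tolist(), xs.tolist()))

# Low-light frame: near-black background with two bright markers.
frame = np.full((6, 6), 0.05)
frame[1, 1] = 0.95
frame[4, 3] = 0.90
markers = find_markers(frame)
```

This illustrates why instrumentation sidesteps the contrast and background-pattern problems, and also why it is burdensome: the method only works if the subject wears the markers and the lighting is controlled.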