The present invention relates generally to motion capture, and more particularly, to labeling data points generated by motion capture.
Motion capture (“MOCAP”) systems are used to capture the movement of a real object and map it onto a computer-generated object. Such systems are often used in the production of motion pictures and video games for creating a digital representation of a person for use as source data to generate a computer graphics (“CG”) animation. In a typical system, an actor wears a suit having markers attached at various locations (e.g., having small reflective markers attached to the body and limbs) and digital cameras record the movement of the actor from different angles while illuminating the markers. The system then analyzes the images to determine the locations and orientations (e.g., as spatial coordinates) of the markers on the actor in each frame. By tracking the locations of the markers, the system generates a spatial representation of the markers over time and builds a digital representation of the actor in motion. The motion is then applied to a digital model, which may then be textured and rendered to produce a complete CG representation of the actor and/or performance. This technique has been used by special effects companies to produce realistic animations in many popular movies.
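The reconstruction of marker locations as spatial coordinates, as described above, is typically done by triangulating each marker from two or more calibrated camera views. The following is a minimal sketch of linear (DLT) triangulation; the function names, the toy camera matrices, and the test point are illustrative assumptions, not part of this specification.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera views.

    P1, P2: 3x4 camera projection matrices (assumed calibrated).
    x1, x2: 2D image coordinates of the same marker in each view.
    Returns the estimated 3D marker position.
    """
    # Each 2D observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[0] @ X) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A, i.e. the right singular
    # vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Illustrative check: an identity camera and one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Repeating this per marker and per frame yields the spatial representation of the markers over time referred to above.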
Tracking the locations of markers, however, is a difficult task. The difficulties compound when a large number of markers is used and multiple actors populate the motion capture space.
Implementations of the present invention provide for a labeling system for labeling motion capture data points for improved identification.
In one implementation, a motion capture labeling system comprises: a body labeling module configured to receive motion capture data and to generate labeled body data, the motion capture data including unlabeled body data and unlabeled face data; and a relative labeling module configured to receive motion capture volume data, to generate labeled face data, and to generate labeled motion capture volume data including the labeled body data and the labeled face data.
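The data flow of the two modules recited above can be sketched as follows. This is a hypothetical illustration only: the class names, the dictionary-based data shapes, and the placeholder labeling logic are assumptions for exposition, not the claimed implementation.

```python
class BodyLabelingModule:
    """Receives motion capture data (unlabeled body and face points)
    and generates labeled body data."""

    def __init__(self, body_model_template):
        # Ordered marker names from a predetermined body model template.
        self.template = body_model_template

    def label(self, mocap_data):
        # Placeholder fit: pair each unlabeled body point with a
        # template label. A real system would score candidate fits.
        return dict(zip(self.template, mocap_data["body"]))

class RelativeLabelingModule:
    """Receives the motion capture volume data, generates labeled face
    data, and merges it with the labeled body data to produce labeled
    motion capture volume data."""

    def label(self, mocap_data, labeled_body):
        labeled_face = {f"face_{i}": p
                        for i, p in enumerate(mocap_data["face"])}
        return {**labeled_body, **labeled_face}

# Illustrative use with a tiny two-marker body and one face point.
mocap = {"body": [(0.0, 1.0, 0.0), (0.0, 0.5, 0.0)],
         "face": [(0.0, 1.7, 0.1)]}
body = BodyLabelingModule(["hip", "knee"]).label(mocap)
labeled = RelativeLabelingModule().label(mocap, body)
```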
In another implementation, the motion capture labeling system further comprises: a body modeling module, a stretch check module, and a kinematic skeleton module.
In another implementation, a method of motion capture labeling comprises: receiving a motion capture beat, the motion capture beat including unlabeled body points and unlabeled face points; creating labeled body points by labeling the unlabeled body points which have a valid fit to a predetermined body model template; verifying the labeled body points using a stretch analysis; creating additional labeled body points by labeling unlabeled body points using a kinematic skeleton analysis; isolating the unlabeled face points; stabilizing the unlabeled face points; labeling the unlabeled face points; and merging the labeled face points and labeled body points.
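The stretch analysis used to verify the labeled body points can be sketched as follows, under the assumption that it exploits near-rigid inter-marker distances: a labeled marker pair whose distance varies substantially across frames is flagged as a possible mislabel. The function name, tolerance, and data layout are illustrative assumptions, not the claimed method.

```python
import numpy as np

def stretch_check(labeled_frames, segments, tol=0.05):
    """Flag labeled marker pairs whose separation deviates from its
    median by more than a fractional tolerance in any frame.

    labeled_frames: list of {label: (x, y, z)} dicts, one per frame.
    segments: pairs of labels expected to be nearly rigidly connected.
    Returns the set of segments failing the stretch check.
    """
    flagged = set()
    for a, b in segments:
        # Distance between the two markers in every frame.
        d = np.array([np.linalg.norm(np.array(f[a]) - np.array(f[b]))
                      for f in labeled_frames])
        med = np.median(d)
        # A large deviation suggests a marker swap or mislabel.
        if np.any(np.abs(d - med) > tol * med):
            flagged.add((a, b))
    return flagged

# Illustrative use: the third frame stretches the hip-knee segment.
frames = [
    {"hip": (0.0, 0.0, 0.0), "knee": (0.0, 0.0, 1.00)},
    {"hip": (0.0, 0.0, 0.0), "knee": (0.0, 0.0, 1.01)},
    {"hip": (0.0, 0.0, 0.0), "knee": (0.0, 0.0, 1.50)},
]
suspect = stretch_check(frames, [("hip", "knee")])
```

Points failing the check would be returned to the unlabeled pool, where the kinematic skeleton analysis recited above may label them on a subsequent pass.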
Other features and advantages of the present invention will become more readily apparent to those of ordinary skill in the art after reviewing the following detailed description and accompanying drawings.