Viewers increasingly expect computer-generated videos to contain high-quality, lifelike depictions. To achieve realistic motion, some schemes use motion-capture systems to collect data of people performing desired motions (e.g., walking or dancing). Characters of a video are then animated to follow the motion-capture data. A frame of motion-capture data can be represented as a mesh of polygons approximating the surface of the person whose motions are being captured. An example is a polygon mesh that approximately surrounds the aggregate volume occupied by a ballerina and her tutu at one moment during a ballet dance.
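The per-frame mesh representation described above can be sketched as follows. This is a minimal illustration with hypothetical names, not an implementation of any particular motion-capture system; a real pipeline would typically use a dedicated 3D geometry library.

```python
from dataclasses import dataclass

@dataclass
class MeshFrame:
    """One frame of motion-capture data: a triangle mesh approximating
    the captured subject's surface at a single instant."""
    vertices: list[tuple[float, float, float]]  # 3D vertex positions
    faces: list[tuple[int, int, int]]           # vertex indices forming each triangle

    def translate(self, dx: float, dy: float, dz: float) -> "MeshFrame":
        """Return a copy with every vertex shifted; animating a character
        amounts to transforming such frames over time."""
        moved = [(x + dx, y + dy, z + dz) for (x, y, z) in self.vertices]
        return MeshFrame(moved, self.faces)

# A single triangle stands in for the thousands of polygons a real capture
# of, say, a dancer would produce.
frame = MeshFrame(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
)
shifted = frame.translate(1.0, 0.0, 0.0)
print(shifted.vertices)  # [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```

In practice each captured frame would carry far more vertices and faces, and a sequence of such frames, played back in order, drives the animated character.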
However, relying on motion-capture data limits the videos to motions that can be physically performed by humans. Some techniques use motion-capture data in an attempt to synthesize motions that were not actually recorded, but these techniques are often limited in the types of character models they can process.