The present invention relates to an automated animation synthesis tool based upon video input.
Input devices for computing systems have not been investigated to the same degree as output devices. In many ways, the traditional keyboard of decades past remains the primary means of entering user input into a computer. The advent of the mouse, joystick and touch-screen has augmented keyboard input, but the vast majority of data is still entered into the computer by keyboard. All of these devices are disadvantageous because they define only a limited set of input data that can be entered into the computer; the input is tied to a predetermined syntactic context. For example, a modern computer keyboard may include 101 keys, which may be used only in a finite number of combinations, thus limiting the amount of data that can be entered into the computer. In the last few years, however, microphones and video cameras have begun to be shipped with new computers, enabling a fundamental change in how computers can perceive the world.
In modern computers, cameras are becoming ubiquitous, thanks in large part to the proliferation of video conferencing and imaging applications. Most video processing applications involve the capture and transmission of data; accordingly, most video technologies for the PC reside in codecs, conferencing, and television/media display. The amount of intelligent, semantic-based processing applied to the video stream typically is negligible. Further, very little has been done to integrate semantic-based processing with computer operation.
There exists a need in the art for a human-machine interface that shifts away from the literal, "touch"-based input devices that have characterized computers for so long. Humans view the world associatively through visual and acoustical experience. The integration of video cameras and microphones now enables computers to perceive their physical environments in a manner in which humans already do. Accordingly, computers that perceive their environment visually will start to bridge the perceptual gap between human beings and traditional computers.
The present invention is directed to an animation application for visually perceptive computing devices.
Embodiments of the present invention provide an animation system that may include a tracker and a token recognizer, each of which receives an input video signal. The animation system also may include a character synthesizer provided in communication with the tracker, the token recognizer and further with a character store.
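The component relationships described above can be illustrated with a minimal sketch. All class names, method signatures, and data fields below are illustrative assumptions for exposition only; they are not part of the disclosure, and the tracking and recognition logic is represented by placeholders.

```python
# Hypothetical sketch of the animation system architecture:
# a tracker and a token recognizer each receive the input video
# signal, and a character synthesizer communicates with both and
# with a character store. All names here are assumptions.

from dataclasses import dataclass, field


@dataclass
class Frame:
    """One frame of the input video signal."""
    pixels: bytes
    timestamp: float


@dataclass
class CharacterStore:
    """Holds animation characters available to the synthesizer."""
    characters: dict = field(default_factory=dict)

    def get(self, name):
        return self.characters.get(name)


class Tracker:
    """Follows a subject's position across video frames."""
    def track(self, frame: Frame):
        # Placeholder: a real tracker would locate the subject here.
        return {"x": 0, "y": 0, "t": frame.timestamp}


class TokenRecognizer:
    """Recognizes tokens (e.g. gestures) in the video signal."""
    def recognize(self, frame: Frame):
        # Placeholder: a real recognizer would classify tokens here.
        return []


class CharacterSynthesizer:
    """Combines tracking data, recognized tokens, and stored
    characters to produce animation output for each frame."""
    def __init__(self, tracker, recognizer, store):
        self.tracker = tracker
        self.recognizer = recognizer
        self.store = store

    def synthesize(self, frame: Frame):
        position = self.tracker.track(frame)
        tokens = self.recognizer.recognize(frame)
        # A real implementation would animate a stored character
        # using the position and tokens; here we return the raw data.
        return {"position": position, "tokens": tokens}
```

In this sketch, both the tracker and the recognizer operate on the same frame, and the synthesizer is the only component that consults the character store, mirroring the communication paths stated above.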