1. Field of the Invention
The present invention relates to the movement of artificially animated three-dimensional figures, and especially to the simulation of human facial expressions by three-dimensional facial features.
2. Description of the Prior Art
Numerous systems have been employed to produce simulations of human facial expressions by animated figures. In this connection, a particularly life-like appearance is provided when the animated figure is equipped with facial features that move to simulate the movement of human facial features during speech, with the concurrent provision of an audio output of actual or recorded human speech. In the past, however, the movement of animated facial features has imitated the corresponding facial movements of people to only a very limited degree. Typically, speech simulation extends no further than the opening and closing of the jaw of a figure in synchronization with the playback of an audio recording.
Some attempts at greater sophistication have been implemented. For example, frequency filters have been employed in conjunction with two-dimensional facial animation to simulate lip movement on a cathode ray tube. In this connection, a combination of frequencies is sensed, and a two-dimensional mouth may be rounded to simulate utterance of "O" and "U" sounds. Also, the upper and lower lips are moved apart a distance corresponding to the amplitude of the sound of human speech. However, no greater conformity of the movement of animated features to actual human facial movement has been achieved, and even the foregoing simulations have been imprecise because analog electrical signal filtration and amplification systems have been employed rather than the digital-to-analog conversion system of the present invention. Because of drift and instability in conventional analog systems, the detection of particular frequencies to produce a rounding of the mouth in two-dimensional animated systems currently in use has been considerably inaccurate.
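The two controls described above, mouth rounding driven by detection of particular frequencies and lip separation driven by amplitude, can be sketched digitally from a single frame of audio samples. The formant band, frame length, and sample rate below are illustrative assumptions only, not values taken from any prior-art system.

```python
import numpy as np

def mouth_pose(frame, sample_rate=8000, round_band=(300.0, 800.0)):
    """Estimate a two-dimensional mouth pose from one frame of audio.

    Returns (rounding, lip_gap):
      rounding - fraction of spectral energy in the assumed "O"/"U"
                 formant band (round_band is an illustrative choice)
      lip_gap  - lip separation proportional to frame amplitude (RMS)
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = (freqs >= round_band[0]) & (freqs <= round_band[1])
    total = spectrum.sum()
    rounding = spectrum[band].sum() / total if total > 0 else 0.0
    lip_gap = float(np.sqrt(np.mean(frame ** 2)))        # RMS amplitude
    return rounding, lip_gap
```

A digital formulation of this kind avoids the drift and instability of analog filter banks, since the band edges are fixed numerically rather than by component tolerances.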
Other types of systems have attempted to produce facial movements corresponding to those of a human face during speech in three dimensions. However, such systems have been far from realistic in their simulation. One such arrangement attempts to move the upper and lower lips of a mannequin separately and in response to high- or low-frequency sounds. The upper lip may be connected to respond to high audio frequencies while the lower lip is designed to move in response to lower audio frequencies, in order to produce a visual effect of distinguishing between the sounds of vowels and consonants. However, this movement represents only a vague approximation of human facial movement during speech, and is not convincingly realistic.
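The two-band mannequin arrangement described above can likewise be sketched as a digital band split, with high-band energy driving the upper lip and low-band energy driving the lower lip. The 1 kHz crossover and the normalized unit scaling are assumptions made here for illustration.

```python
import numpy as np

def lip_drive(frame, sample_rate=8000, crossover_hz=1000.0):
    """Split a frame's spectral energy at an assumed crossover frequency.

    Returns (upper, lower): normalized drive signals in [0, 1], where
    the upper lip follows high-band energy and the lower lip follows
    low-band energy, as in the two-band arrangement described above.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    high = spectrum[freqs >= crossover_hz].sum()
    low = spectrum[freqs < crossover_hz].sum()
    total = high + low
    if total == 0:
        return 0.0, 0.0
    return float(high / total), float(low / total)
```

Even with exact digital band separation, this scheme yields only the coarse vowel-versus-consonant effect noted above, since a single crossover cannot distinguish the many lip shapes of actual speech.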