The present invention is related to the field of computer animation, and more specifically, to a technique for animating a personalized three-dimensional (3-D) model of a person's face in accordance with the facial motion of the person.
The goal of facial animation is to mimic the facial motion of a person on a 3-D model of the person's face as accurately and as quickly as possible. The accuracy of a facial animation method is measured by the realism of the animated face it generates. On the other hand, the computational speed of a facial animation method determines whether it can be realized in real-time. There are known techniques for using markers to track selected facial features such as the eyebrows, ears, mouth, and corners of the eyes.
The 3-D model of a person's face is composed of a 3-D triangular mesh, referred to as the geometry mesh, and an associated composite image of the person's face, referred to as the texture image. A 3-D triangular mesh refers to a connected set of triangular patches in 3-D whose corners form the nodes of the mesh. Each triangular patch in the geometry mesh acquires its image data from an associated triangular region in the texture image. The geometry mesh represents the geometry of the person's face in its neutral state. Animating the 3-D face model of a person involves deforming the geometry mesh of the face model to reflect the changes in the geometry of the face caused by the motion of the face.
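The data structure described above can be sketched as follows. This is a minimal illustration only; the node positions, triangle connectivity, and texture coordinates are hypothetical placeholders, not an actual face model.

```python
import numpy as np

# Nodes of the geometry mesh: (x, y, z) positions of the face in its
# neutral state (illustrative values, not a real face scan).
nodes = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.1],
    [0.5, 1.0, 0.2],
    [1.5, 1.0, 0.0],
])

# Triangular patches of the mesh, each a triple of node indices.
triangles = np.array([
    [0, 1, 2],
    [1, 3, 2],
])

# Each node also carries a (u, v) location in the texture image, so
# every triangle in the geometry mesh acquires its image data from the
# corresponding triangle in the texture image.
tex_coords = np.array([
    [0.00, 0.0],
    [0.50, 0.0],
    [0.25, 0.5],
    [0.75, 0.5],
])

# Basic consistency checks: triangle indices must reference valid
# nodes, and every node needs a texture coordinate.
assert triangles.max() < len(nodes)
assert len(tex_coords) == len(nodes)
```

Animating the model amounts to changing the rows of `nodes` over time while `triangles` and `tex_coords` stay fixed.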
The methods disclosed in the prior art on facial animation can be generally classified as (i) physics-based methods and (ii) rule-based methods. In physics-based methods, the motion of each triangle of the geometry mesh is controlled by a multi-layer facial muscle system. Dynamic models of the facial muscles are employed to calculate the propagation of any facial force throughout the face and to obtain the resulting deformation of the surface of the face. Physics-based methods can produce realistic animations; however, because of their high computational cost, they cannot be used in real-time applications.
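The computational cost of physics-based methods can be illustrated with a toy mass-spring step, a common simplification of dynamic muscle models: every integration step must visit every spring and every node, and many small steps are needed per frame. The function below is a hedged sketch with illustrative constants, not the muscle system of any particular prior-art method.

```python
import numpy as np

def spring_step(pos, vel, edges, rest_len, k=50.0, damping=0.9, dt=0.01):
    """Advance node positions one explicit-Euler time step under
    linear spring forces (unit node masses assumed)."""
    forces = np.zeros_like(pos)
    for i, j in edges:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            # Hooke's law along the edge direction.
            f = k * (length - rest_len[(i, j)]) * d / length
            forces[i] += f
            forces[j] -= f
    vel = damping * (vel + dt * forces)
    return pos + dt * vel, vel
```

Because the force of each spring propagates to its neighbors only one step at a time, many such steps per rendered frame are required for a stable, realistic result, which is the source of the real-time bottleneck noted above.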
In rule-based methods, a subset of the nodes of the geometry mesh, referred to as feature points, is used to control the movement of the rest of the nodes of the geometry mesh. Each feature point is assigned an area of influence on the geometry mesh. When a feature point is moved, the nodes of the geometry mesh that belong to the area of influence of the feature point move according to some predefined deformation rules. These deformation rules may specify linear, piece-wise linear, or rotational motion for the nodes of the mesh, with the amount of motion being inversely proportional to the distance from the node to its controlling feature point. Although rule-based methods provide real-time deformations of the face, they may lack realism as they are not based on any physical model.
The present invention provides an improvement designed to satisfy the aforementioned needs. In particular, the present invention is directed to a computer program product for animating a 3-D face model realistically and in real-time by performing the steps of: (a) receiving the 3-D face model of a person; (b) receiving the global and local facial motion values; and (c) animating the fine geometry mesh of the 3-D face model using a sparse shape mesh overlaying the geometry mesh.
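One way step (c) can work is sketched below: each node of the fine geometry mesh is attached to a triangle of the sparse shape mesh via fixed barycentric weights, so deforming the few shape-mesh nodes carries the many geometry-mesh nodes along in one vectorized operation. The barycentric attachment scheme is an assumption for illustration only; the text above states just that a sparse shape mesh overlays and drives the fine geometry mesh.

```python
import numpy as np

def drive_fine_mesh(shape_nodes, attach_tri, attach_bary):
    """Recompute fine-mesh node positions from the (deformed) sparse
    shape mesh.

    shape_nodes : (S, 3) positions of the sparse shape-mesh nodes
    attach_tri  : (N, 3) indices into shape_nodes, one shape-mesh
                  triangle per fine-mesh node
    attach_bary : (N, 3) barycentric weights (each row sums to 1)
    """
    # Weighted sum of each fine node's three controlling shape nodes.
    return np.einsum('nk,nkd->nd', attach_bary, shape_nodes[attach_tri])
```

Since the per-node attachment is computed once, each animation frame costs only a sparse matrix-like product, combining the speed of rule-based methods with the smooth, mesh-wide deformation the sparse shape mesh provides.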