(1) Field of the Invention
The present invention relates to the field of three-dimensional (3D) computer graphics (CG) animation, and especially to a 3D CG animation apparatus for creating 3D graphics of a complex object having a hierarchical structure, such as a CG character.
(2) Description of Related Art
The Key Frame method is a conventional 3D graphics technology for creating animation of a character, such as one modeling an animal or a human. The Key Frame method in a broad sense uses joint angle data and is based on forward kinematics and the skeleton method (see Reference Document 1* as one example). The following briefly describes this Key Frame method with reference to FIGS. 2, 3, and 4. *Reference Document 1: San-jigen CG ("3D CG"), pp. 143–161, Masayuki Nakajima (ed.), Ohmusha, 1994.
For a character shape shown in FIG. 2, a skeletal structure shown in FIG. 3 is defined. This skeletal structure is represented by a linked hierarchical structure shown in FIG. 4.
Motion of the character's entire body is created based on the direction and movement of a root in the skeletal structure shown in FIG. 3. Motions of the character's parts are created through transformations in the local coordinate systems defining the positions of joints in the skeletal structure. The motion of the character is created by moving the surface structure of the character in accordance with the motion of this skeletal structure. A joint transformation in the local coordinate system is usually expressed with a 4-by-4 matrix, which can represent both rotation and parallel translation of a joint, although joints are usually only allowed to rotate and cannot translate. This rotational angle of a joint is called a "joint angle". According to forward kinematics, this transformation is performed for each body part of the CG character, beginning with the root and proceeding in order along the links of the skeletal structure, so as to specify the state of each body part. The state of each body part is then represented as a motion in the coordinate system defining the root (usually the world coordinate system). From such motions of the body parts, the position and direction of the character's entire body are specified. When this operation is performed at each predetermined time (i.e., at each frame), animation that changes over time is created. As described in Reference Document 1, the Key Frame method requires joint angle data in accordance with the frame rate. With this method, joint angle data is necessary only at each sampling time as key data; joint angle data for the other periods is calculated by interpolating the key data.
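The forward kinematics and key-data interpolation described above can be illustrated with a minimal sketch. The two-bone chain, its offsets, and its key times and key joint angles below are hypothetical examples, not data from the invention; joint transformations are composed from the root down the hierarchy using 4-by-4 homogeneous matrices, and joint angles between key times are obtained by linear interpolation.

```python
import math

def rot_z(angle):
    """4x4 rotation about the z-axis (joints only rotate; no translation)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    """4x4 parallel-translation matrix (fixed bone offset from the parent)."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def interpolate(key_times, key_angles, t):
    """Joint angle at time t, linearly interpolated between key data."""
    for i in range(len(key_times) - 1):
        t0, t1 = key_times[i], key_times[i + 1]
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return key_angles[i] * (1 - w) + key_angles[i + 1] * w
    return key_angles[-1]

# Hypothetical two-bone chain: root -> upper arm -> forearm.
# Each entry: (bone offset from parent, key times, key joint angles).
skeleton = [
    (translate(0, 0, 0), [0.0, 1.0], [0.0, math.pi / 2]),  # shoulder
    (translate(1, 0, 0), [0.0, 1.0], [0.0, math.pi / 4]),  # elbow
]

def world_positions(t):
    """Walk the hierarchy from the root, accumulating transformations."""
    m = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
    positions = []
    for offset, times, angles in skeleton:
        m = matmul(matmul(m, offset), rot_z(interpolate(times, angles, t)))
        positions.append((m[0][3], m[1][3]))  # joint origin in world space
    return positions

print(world_positions(0.0))  # pose at the first key time
print(world_positions(0.5))  # pose interpolated halfway between the keys
```

Evaluating `world_positions` at each frame time, rather than only at the key times, is what reduces the stored data to the key data alone.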
According to such a conventional method, generation of motion data and animation creation control (see Reference Document 2** as one example) are performed as follows. **Reference Document 2: Digital Character Animation, George Maestri, translated by Shoichi Matsuda, Prentice Hall, 1999.
(1) Design a CG character, define its skeletal (hierarchical) structure, and generate shape data for the character.
(2) Create a pose of the CG character (as well as a facial expression of the character) at a key time.
(3) Determine a shape of the CG character at each time according to the Key Frame method, render the determined shape using suitable lighting and camera data, and continuously display the rendered shapes to create 3D CG animation.
Consequently, as shown in FIG. 5, the motion of the character's entire body and its facial expression are represented as 3D CG animation as defined by the original data.
Reference Document 2 also discloses a morphing method as a facial animation method for generating facial expressions, including lip motion. Facial expressions and lip motion can be created not only with the morphing method but also with other methods. For instance, a plurality of textures representing different facial expressions are first generated, and then texture mapping is performed using these textures, switching textures when necessary.
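A common form of the morphing method mentioned above blends corresponding vertices of two face meshes with the same topology. The sketch below is a minimal illustration under that assumption; the "neutral" and "smile" vertex lists are invented example data, not shapes from the references.

```python
# Morphing sketch: linearly blend corresponding vertices of two meshes.
# w = 0 reproduces mesh A; w = 1 reproduces mesh B; values between give
# the intermediate facial expressions used for animation.

def morph(verts_a, verts_b, w):
    """Blend two vertex lists of identical topology by weight w."""
    return [tuple((1 - w) * a + w * b for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]

# Hypothetical mouth-corner vertices for a neutral face and a smile.
neutral = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(-1.1, 0.4, 0.0), (1.1, 0.4, 0.0)]

print(morph(neutral, smile, 0.5))  # expression halfway to the smile
```

Animating the weight w over time produces a continuous change of facial expression, analogous to interpolating joint angles between key frames.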
In the field of 3D CG character animation, especially animation for interactive software such as a video game, a state transition diagram such as that shown in FIG. 10A is usually used so that transition is made from one motion of the CG character to another. With the above conventional method, the CG character's motions, including a basic state and motions A˜D shown in FIG. 10A, are created with the above animation creation steps (1)˜(3). That is to say, this CG character moves in six different ways, including the basic state, according to the state transition diagram in FIG. 10A. In order to increase the patterns of the character's motion, more motion data is necessary, which increases the data amount.
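Such a state transition diagram is typically implemented as a lookup table from the current motion and an input event to the next motion clip. The sketch below is an assumed minimal form; the state and event names are invented and do not correspond to the actual motions in FIG. 10A.

```python
# Sketch of a motion state machine driven by a transition table.
# Each entry maps (current motion, input event) -> next motion clip.
TRANSITIONS = {
    ("basic", "start_a"): "motion_a",
    ("basic", "start_b"): "motion_b",
    ("motion_a", "done"): "basic",
    ("motion_b", "done"): "basic",
}

def next_motion(state, event):
    """Return the next motion; unknown events keep the current motion."""
    return TRANSITIONS.get((state, event), state)

state = "basic"
for event in ["start_a", "done", "start_b"]:
    state = next_motion(state, event)
print(state)  # prints "motion_b"
```

Each state in such a table must be backed by its own complete motion data, which is why adding motion patterns, or combinations such as "sitting down while waving arms", directly inflates the data amount.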
Here, assume that a motion for "walking while waving arms" and a motion for "sitting down on a chair" are provided to the CG animation apparatus in advance. When the CG character needs to make a motion for "sitting down on a chair while waving arms", this whole motion must be incorporated as one of the states in the state transition diagram. The amount of data further increases when such a CG character has facial expressions. When a CG character that makes a certain body motion has different facial expressions, the same body motion data must be provided for each of the different facial expressions. The created body motion data can be reused, so there is no need to repeatedly author the same body motion data for different facial expressions; however, the total amount of stored data is still large. This is a critical problem for a device that has limited memory capacity. For such a device, the data amount is reduced by lessening the number of motions made by the CG character. The user of such a device is therefore often bored by the very limited variety of the CG character's motions.
Holding an object, such as a tool, in one's hand is a natural and frequently occurring human behavior. However, having a CG character perform such natural behavior according to the conventional animation method involves the following problems.
The first problem is that a CG character holding an object in its hand and a character holding no object have different skeletal structures, and therefore two types of shape data representing both states are necessary. In addition, such shape data needs to be provided for every type of object to be carried by the CG character. The second problem is that, due to the above difference in skeletal structure, it is also necessary to provide two types of body motion data, one for the state of holding an object and one for the state of holding no object, even when the body motion itself is the same. Consequently, even when the above two motions are similar, two types of shape data and two types of motion data are necessary, which almost doubles the data amount. If facial expressions are added to such a variety of motions, the data amount increases explosively.
The present invention is made in view of the above problems and aims to provide a CG animation apparatus capable of creating a variety of CG character animations using a smaller amount of data than required by the conventional technique, while minimizing the data redundancy that causes the above problems.
The present invention also aims to achieve a CG animation apparatus capable of easily adding and deleting a hierarchical level and a shape element, such as an object, so as to create a CG character holding the object in its hand.