1. Field of the Invention
The present invention relates to a virtual pseudo-human figure generating system for generating the motions and facial expressions of virtual pseudo-human figures used in user interfaces of computer systems or the like, and more particularly to a virtual pseudo-human figure generating system with motion modifying and motion complementing functions for generating natural-looking motions of virtual pseudo-human figures.
2. Description of the Related Art
Conventionally, user interfaces utilizing virtual pseudo-human figures created by computer graphics are known as personifying agents. Concerning such personifying agents, Nikkei Computer published by Nikkei BP (Jan. 22, 1996, pp. 124-126) carried an article entitled "Research on personifying agents in progress: application to next generation interfaces eyed." The technique described in this reference shows a virtual pseudo-human figure on a computer display screen and, by giving that figure facial expressions, motions and speech with which to converse, enables the user to operate the computer as if he or she were speaking with a real person.
This serves to compensate for the disadvantage of computer operation, which is still unfamiliar and difficult for ordinary users in spite of the advent of the graphical user interface (GUI). The term "virtual pseudo-human figure" as used here refers not only to figures looking like humans but also to personified figures of other animals and imaginary creatures.
Examples of the prior art using such virtual pseudo-human figures for user interfacing include the "information input apparatus" disclosed in the Japanese Patent Application Laid-open No. Hei-3-215891. According to this technique, in operating a video cassette recorder or the like, a personified guiding character is displayed, who acts or gives facial expressions according to the content to be displayed by the apparatus. Further, the "input display apparatus" described in the Japanese Patent Application Laid-open No. Hei-6-202831 is made easy and pleasant to operate, with a natural feeling, by displaying on a touch panel a character who is, as it were, the user's second self. Furthermore, the "communication interface system" disclosed in the Japanese Patent Application Laid-open No. Hei-8-106374 facilitates the exchange of information between the human and the computer system by providing means to determine appropriate speech and images with which the computer system is to respond according to the relative age and sex of the customer. In addition, the "automatic diagnosis display apparatus" described in the Japanese Patent Application Laid-open No. Sho-60-157693 displays the overall diagnosis in the expressions and colors of a human face or the face of a personified non-human creature or object. Although these techniques use acting virtual pseudo-human figures as user interfaces, those virtual pseudo-human figures are generated for limited purposes in limited situations of use, and it is difficult to accomplish general-purpose generation of human images responsive to any situation that may arise.
Meanwhile, known systems for generating virtual pseudo-human figures include the "human bodily motion visualizing system" described in the Japanese Patent Application Laid-open No. Hei-7-44726 by the present applicant, according to which specific motion patterns of a virtual pseudo-human figure generated by computer graphics are selected and combined to simulate human motions.
The motion patterns referred to here are, as shown in FIG. 9, such as "bowing," "raising a hand" and "knitting brows" in a classification of basic human motions. The configuration of this virtual pseudo-human figure generating system is illustrated in FIG. 10. When an acting instruction is given from outside, a motion pattern generator 10 reads out of a motion pattern memory 5 a motion pattern corresponding to the content of the acting instruction. For instance, if an instruction to "greet" is received, a motion pattern of "inclining forward" will be read out.
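The readout step just described amounts to a table lookup from acting instructions to stored motion patterns. The following is a minimal illustrative sketch; the instruction names, the `MotionPatternMemory` class and its entries are assumptions for illustration, not part of the disclosed apparatus:

```python
# Hypothetical sketch of the motion pattern memory (element 5) and the
# motion pattern generator (element 10): an acting instruction is
# mapped to a stored motion pattern, e.g. "greet" -> "inclining forward".
class MotionPatternMemory:
    def __init__(self):
        # instruction -> motion pattern name (illustrative entries)
        self.patterns = {
            "greet": "inclining forward",
            "point": "raising a hand",
            "disapprove": "knitting brows",
        }

    def read(self, instruction):
        return self.patterns.get(instruction)


def generate_motion_pattern(memory, instruction):
    """Motion pattern generator: read out the pattern for an instruction."""
    pattern = memory.read(instruction)
    if pattern is None:
        raise KeyError(f"no motion pattern for instruction {instruction!r}")
    return pattern
```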
The motion pattern read out is delivered to a virtual pseudo-human figure motion generator 30, which causes a virtual pseudo-human figure model to move on the basis of the delivered motion pattern. The virtual pseudo-human figure is generated by using model data stored in a virtual pseudo-human figure model memory 20. The virtual pseudo-human figure model is a polyhedron generated by computer graphics, with a backbone, neck, head, hands and other constituents connected by a plurality of joints. For instance, a model from the waist up, as shown in FIG. 11, is conceivable. This model consists of two arms each having three joints of wrist, elbow and shoulder, and a backbone which can bend at the belly, chest and neck.
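The waist-up model of FIG. 11 can be represented, for illustration, as a set of named joints each holding a bend angle; a motion pattern then becomes a set of target joint angles. The joint names and the angle convention below are assumptions, not taken from the reference:

```python
# Illustrative waist-up model matching the description: two arms, each
# with wrist, elbow and shoulder joints, and a backbone that can bend
# at the belly, chest and neck.  Each joint stores an angle in degrees.
JOINTS = [
    "left_wrist", "left_elbow", "left_shoulder",
    "right_wrist", "right_elbow", "right_shoulder",
    "belly", "chest", "neck",
]

def neutral_pose():
    """All joints at zero: the model standing straight."""
    return {joint: 0.0 for joint in JOINTS}

def apply_pattern(pose, pattern):
    """Overwrite the pose with the joint angles a motion pattern specifies."""
    new_pose = dict(pose)
    new_pose.update(pattern)
    return new_pose

# "Inclining forward" expressed as bends of the backbone joints
# (illustrative angles).
INCLINE_FORWARD = {"belly": 20.0, "chest": 15.0, "neck": 10.0}
```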
However, the above-described virtual pseudo-human figure generating system according to the prior art involves the following problems.
A first problem is that, because only such basic patterns as "bowing," "raising a hand" and "knitting brows" are available, there is no continuity between different motion patterns. As one can see from real human motions, motions constitute a smooth flow. When shifting from pattern A to pattern B, the shift to the next motion does not take place abruptly. Between patterns A and B, there always is an intermediate pattern to facilitate the shift. For instance, in a sequence of motions of "beginning to walk after bowing," there is required between the "bowing" motion and the "walking" motion a motion to "raise the head," which is the aforementioned intermediate pattern facilitating the shift. Thus, although a person's actions form a smoothly continuing sequence of motion patterns, any conventional virtual pseudo-human figure generating system suffers from motion patterns that do not continue smoothly from one to the next.
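The missing intermediate pattern can be understood as interpolation between the final pose of one pattern and the initial pose of the next. The sketch below uses simple linear interpolation of joint angles over a pose dictionary; this representation is an assumption made for illustration:

```python
def intermediate_poses(end_of_a, start_of_b, steps):
    """Bridge the end of motion pattern A and the start of pattern B by
    linearly interpolating each joint angle over the given number of
    intermediate frames."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # strictly between 0 and 1
        frames.append({
            joint: (1.0 - t) * end_of_a[joint] + t * start_of_b[joint]
            for joint in end_of_a
        })
    return frames

# Example: from the end of "bowing" (head inclined) to the start of
# "walking" (head up), the interpolated frames raise the head gradually,
# realizing the "raise the head" intermediate pattern.
```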
A second problem is that, when no acting instruction is given from outside, the virtual pseudo-human figure model is in a completely static state. In a dialogue, the participants naturally keep moving in some way, if only slightly, so a model not moving at all is extremely unnatural. Such slight motions may be referred to as "idling": for instance, "moving one's head and body slightly while listening to somebody else." While a real person is listening, it may be rare for him or her to make any positive communicative motion, but it is just as rare for him or her not to move at all. Conventional virtual pseudo-human figure generating systems keep the figure completely still in such a situation; none of them realizes "idling."
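One simple way to realize the "idling" described above is to perturb each joint of the current pose by a small random angle whenever no acting instruction is pending. The sketch below is illustrative; the amplitude and the pose representation are assumptions:

```python
import random

def idling_pose(base_pose, amplitude=2.0, rng=None):
    """Perturb each joint of the base pose by a small random angle
    (within +/- amplitude degrees) so that the figure is never
    completely still while no acting instruction arrives."""
    rng = rng or random.Random()
    return {
        joint: angle + rng.uniform(-amplitude, amplitude)
        for joint, angle in base_pose.items()
    }
```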
A third problem is that, because the motion patterns are formalized, it is impossible to generate characteristic motions matching the attributes of the model. The virtual pseudo-human figure model can take one of many different forms: it may closely resemble a real human, be a deformed animal shape, or be a fictitious pseudo-human figure not resembling a real human at all. While the model should always act according to motion patterns, it may be necessary to limit the available range of motions or add unique motions according to the type of model. For instance, in a motion expressing joy, it may be enough for an adult model just to "smile," but a child model should also "jump." Generation of such motions dependent on the attributes of the model has not been achieved by the prior art.
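The adult/child example can be sketched as a base motion table extended by attribute-specific entries; the table contents and function names below are hypothetical illustrations of the idea, not part of any cited apparatus:

```python
# Hypothetical attribute tables: base motions for an expression, plus
# extra motions added only for particular model types
# (a "child" model also jumps when expressing joy).
BASE_MOTIONS = {"joy": ["smile"]}
ATTRIBUTE_EXTRAS = {("joy", "child"): ["jump"]}

def motions_for(expression, model_type):
    """Combine the base motions with any extras tied to the model type."""
    return (BASE_MOTIONS.get(expression, [])
            + ATTRIBUTE_EXTRAS.get((expression, model_type), []))
```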
A fourth problem is that, because the motion patterns are formalized, it is impossible to accentuate communication with emphatic gestures. It may sometimes be desired to accentuate a motion pattern by varying the speed or locus of the motion involved. For instance, as illustrated in FIG. 12, the same motion of "holding out a hand" can conceivably be done in one of two ways, as in FIG. 12(a) or (b). One may simply hold out one's hand as in FIG. 12(a), or let the hand move in a greater locus as in FIG. 12(b). The latter accentuates the act of holding out one's hand in an attempt to make the communication more effective. No embodiment of the prior art has means to accentuate a motion in such a way.
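Accentuation by enlarging the locus, as in FIG. 12(b) versus FIG. 12(a), can be sketched as scaling each joint angle of a motion trajectory away from its starting pose by a gain factor. The trajectory representation and gain value are illustrative assumptions:

```python
def accentuate(trajectory, gain=1.5):
    """Enlarge the locus of a motion by scaling each joint angle's
    excursion from the starting pose by the given gain, so the same
    motion pattern is performed with a greater sweep."""
    start = trajectory[0]
    return [
        {joint: start[joint] + gain * (pose[joint] - start[joint])
         for joint in pose}
        for pose in trajectory
    ]
```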
The object of the present invention, therefore, is to solve the problems pointed out above and to provide a virtual pseudo-human figure generating system capable of generating a virtual pseudo-human figure acting in a more natural manner.