The present invention relates to an apparatus and a method for information processing and a storage medium for storing such a method. More particularly, the invention relates to an apparatus and a method for information processing and a storage medium for accommodating that method, whereby an avatar that is active in a shared virtual space is made to output a sound during a chat.
There have existed personal computer network services such as NIFTY-Serve (trademark) of Japan and CompuServe (trademark) of the United States. Each of these services allows a plurality of users to connect their personal computers through modems and over a public switched telephone network to a centrally located host computer in accordance with a predetermined communication protocol. A cyberspace service called Habitat (trademark) has been known in this field.
The development of Habitat was started in 1985 by LucasFilm Ltd. of the United States. When completed, Habitat was run by QuantumLink, a U.S. commercial network, for about three years before Fujitsu Habitat (trademark) began to be offered in Japan by NIFTY-Serve in February 1990. Habitat embraces a virtual city called “Populopolis” which, drawn in two-dimensional graphics, is inhabited by users' alter egos called avatars (incarnations of Hindu deities). Through their avatars, the users carry on between them what is known as a chat (a real-time text-based dialogue in which characters are input and read by users). More detailed information about Habitat is found in “Cyberspace: First Steps” (ed. by Michael Benedikt, 1991, MIT Press, Cambridge, Mass., ISBN 0-262-02327-X, pp. 282–307).
In a conventional cyberspace system run by the above-mentioned type of personal computer network service, virtual streets as well as house interiors were described in two-dimensional graphics. For apparent movement toward the depth of a scene or back to its front side, avatars were simply moved upward or downward against a two-dimensional background. There was precious little power of expression to make users enjoy a virtual experience of walking or moving about in the virtual space. Furthermore, a given user's avatar was viewed along with other users' avatars simply from a third party's point of view in the virtual space. This was another factor detracting from the effort to let users have more impressive virtual sensory experiences.
In order to improve on such more or less unimpressive proxy experiences, there have been proposed functions which display a virtual space in three-dimensional graphics and which allow users to move freely about in the virtual space from their avatars' points of view. Such functions, disclosed illustratively in U.S. Pat. No. 5,956,038, are implemented by use of 3D graphic data in a description language called VRML (Virtual Reality Modeling Language). A description of various cyberspace environments in which users may carry on chats using avatars is found in the Sep. 9, 1996 issue of Nikkei Electronics (a Japanese periodical, No. 670, pp. 151–159).
Where avatars and so-called virtual pets are set to be active in such a virtual space, users have conventionally been expected to operate predetermined keys selectively in order to get their avatars or virtual pets to perform certain actions.
Illustratively, the present applicant has proposed a plurality of keys by which to cause an avatar to execute a set of actions in a virtual space, as shown in FIG. 1.
In the example of FIG. 1, an active key A operated by a user enables an avatar to call up a virtual pet. Operating a sleep key B causes the avatar to put the virtual pet to sleep.
Operating a feed key C causes the avatar to feed the virtual pet. A smile-and-praise key D, when operated by the user, causes the avatar to smile at and praise the virtual pet. Operation of a play-tag key E prompts the avatar to play tag with the pet in the virtual space.
A scold key F when operated causes the avatar to scold the virtual pet for discipline. Operating a brush-and-groom key G causes the avatar to brush and groom the virtual pet.
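The one-key-per-action scheme outlined above may be sketched, purely for illustration, as a fixed table that maps each key to a single avatar action. The key letters follow FIG. 1, but the `ACTION_KEYS` table, the action names, and the `handle_key` function are hypothetical assumptions introduced here, not part of the original disclosure.

```python
# Illustrative sketch of the conventional one-key-per-action scheme.
# Key letters follow FIG. 1; action names are assumed for illustration.
ACTION_KEYS = {
    "A": "call_pet",          # active key: avatar calls up the virtual pet
    "B": "sleep",             # sleep key: avatar puts the pet to sleep
    "C": "feed",              # feed key: avatar feeds the pet
    "D": "smile_and_praise",  # smile-and-praise key
    "E": "play_tag",          # play-tag key
    "F": "scold",             # scold key: discipline the pet
    "G": "brush_and_groom",   # brush-and-groom key
}

def handle_key(key: str) -> str:
    """Dispatch a key press to the avatar action bound to it."""
    action = ACTION_KEYS.get(key)
    if action is None:
        # Any action not in the table cannot be performed at all:
        # supporting a new action requires adding yet another key.
        return "no_action"
    return action
```

The sketch makes the scalability problem concrete: every additional avatar action enlarges the key table by one entry, and the user must locate the correct key among all of them for each desired action.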
In such an environment, each action performed by the avatar corresponds to a key that needs to be operated. To get the avatar to execute more actions requires setting up more keys. The user must select one of the numerous keys for any desired action to be done by the avatar; the procedure involved makes it difficult to get the avatar to carry out desired actions rapidly.
The avatars' various actions are often performed during chats carried on between them. In such cases, the users entering strings of characters through a keyboard to conduct their chats need to find and operate suitable keys to have their avatars execute the desired actions. This requires the users to take their hands off the keyboard to make the necessary operations, which tends to hamper the users' smooth text input.