The present invention relates generally to the field of telecommunications, and more particularly to telecommunications in which graphical user icons are used for communication.
Electronic mail is rapidly becoming the preferred method of remote communication. Millions of people send e-mails to friends, family, and business associates in place of telephone calls, letters, and travel to be physically present with the other party. This method of communication is popular, and among many people it is the preferred method of communication. However, electronic mail lacks the personal feeling that users receive through an actual face-to-face meeting or, to a lesser extent, a telephone call. Face-to-face meetings and telephone calls are superior and more rewarding methods of communication because, in these media, behavioral information such as emotions, facial expressions, and body language is quickly and easily expressed, providing valuable context within which communications can be interpreted. In e-mail, communication is stripped of emotional and behavioral clues, and the dry text is often misinterpreted because of this absence. For example, if a sender types "I think it may be a good idea" in an e-mail, the recipient's interpretation is ambiguous. If the recipient could see the sender smile, the recipient would know the sender is positive about the idea. If the recipient could see a doubtful expression (a raised eyebrow, for example) on the sender's face, the recipient would understand that the sender is unsure whether the idea is good or not. This type of valuable behavioral information about a person's state is communicated in face-to-face communication. Other types of emotional information are also communicated in face-to-face meetings. If a person is generally cheery, this fact is communicated through the person's behavior; it is apparent from the individual's facial and body movements. If a generally cheery person is depressed, this emotion is likewise apparent through facial and body movements and will prompt an inquiry from the other party.
However, in an e-mail environment, these types of clues are difficult to convey. One weak remedy to this problem is the rise of "emoticons": combinations of letters and punctuation marks that vaguely resemble, or are deemed to mean, emotional states, such as the now common smile ";-)".
Telephonic communication provides an advance over e-mail because it also provides audio clues in the speaker's tone of voice, which allow a listener to quickly determine, for example, whether a statement was intended to be taken seriously or as a joke. However, telephonic communication provides no visual clues to aid a user in understanding communications, and thus a listener is often left to guess at what the opposite party truly intends to convey.
Therefore, a system is needed that is compatible with the e-mail system that millions of users are accustomed to using for communication, but that also provides valuable emotional and behavioral information enabling the recipient to interpret the communication in context.
The present invention is a system and method for remote communication that allows communication over a network, such as the Internet, while still providing behavioral information that supplies a context within which the communication can be interpreted. Accordingly, a visual representation of a user is provided to a recipient. A set of behavioral characteristics of the visual representation is provided to the user, the behavioral characteristics representing emotional contexts within which data is to be interpreted by the recipient of the communication. Next, the user selects a behavioral characteristic and inputs data to be communicated to the recipient, along with any optional specific behavioral commands. Behavioral characteristics are associated with behavioral movements to be animated by the visual representation. Then, data is communicated to the recipient concurrently with behavioral movement information associated with the selected behavioral characteristic, where the behavioral movement information causes the visual representation of the sender to animate facial and body movements that communicate the selected behavioral characteristic, thus providing the emotional context within which the recipient interprets the communicated data. For example, if the user has selected an extroverted behavioral characteristic and types a phrase such as "Hello," the present invention analyzes the phrase and animates the visual representation with behavioral movements responsive to the selection of the extroverted behavioral characteristic, for example, animating the visual representation to say "Hello" with a big wave and a smile. Thus, the recipient receives the data, views the visual representation with its applied behavioral movements, and immediately understands that the sender is an extrovert or is in a good mood.
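The message flow described above can be sketched in pseudocode form as follows. This is a minimal illustration, not the claimed implementation; the names (`MOVEMENTS`, `build_message`) and the particular movements are hypothetical assumptions introduced only for clarity.

```python
# Hypothetical sketch: the sender selects a behavioral characteristic,
# types text, and the system transmits the text together with the
# behavioral movement information implied by that characteristic.

# Assumed mapping from behavioral characteristics to movement sets.
MOVEMENTS = {
    "extroverted": ["big_wave", "smile"],
    "introverted": ["slight_nod"],
}

def build_message(sender, characteristic, text):
    """Bundle the communicated data with behavioral movement information."""
    return {
        "sender": sender,
        "text": text,
        "movements": MOVEMENTS.get(characteristic, []),
    }

msg = build_message("alice", "extroverted", "Hello")
print(msg["movements"])  # → ['big_wave', 'smile']
```

On the recipient side, the visual representation would be animated with the listed movements while the text is displayed, supplying the emotional context the text alone lacks.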
In another example, if the sender sends the statement "I should fire you" with a smile and a wink, the recipient knows the statement is in jest. Passionate commitment to an idea can be communicated through the display of extravagant gestures, and positive feelings about a recipient can be communicated through a smile.
In a preferred embodiment, behavioral movements are generated responsive to natural language processing of the text, by recognizing that certain words in the text can be grouped into categories. Predefined categories to be used for natural language processing include ejectives, prepositions, volumetrics, count nouns, egocentricity, xenocentricity, negatives, positives, referents, interrogatories, and specifics. The categories are then linked to behavioral movements that express a behavior responsive to the user's behavioral characteristic selection. For example, if an ejective is used, such as "ow!", a hurt expression is generated for the sender's visual representation. The specific expression is selected responsive to the selected behavioral characteristics, due to weightings imparted on the behavioral movements by the selection of the behavioral characteristics. For example, if a comedian personality is selected by the sender, the "ow" is accompanied by exaggerated facial movements and dancing around as if in pain, or clutching at his or her heart, these movements having been assigned a higher weight because of the selection of the comedian personality. In another embodiment, natural language processing includes recognition of predefined phrases in the text communicated by the sender. The phrases are linked to one of the predefined categories, and the behavioral movements associated with the category can be used upon recognition of the predefined phrase. Thus, the present invention restores the ability to communicate essential emotional and behavioral information in a remote communication, providing a more natural and complete communication interface between users.
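The category-based natural language processing and personality weighting described above can be sketched as follows. The word lists, movement names, and weight values are illustrative assumptions, not part of the disclosed embodiment; only the mechanism (token to category, category to candidate movements, personality-weighted selection) follows the text.

```python
import random

# Assumed token-to-category word lists (a few of the named categories).
CATEGORY_WORDS = {
    "ejective": {"ow!", "ouch!", "wow!"},
    "negative": {"no", "never", "not"},
    "positive": {"yes", "great", "good"},
}

# Assumed category-to-movement mapping.
CATEGORY_MOVEMENTS = {
    "ejective": ["hurt_expression", "exaggerated_pain_dance"],
    "negative": ["head_shake"],
    "positive": ["nod", "smile"],
}

# Personality selection imparts weights on movements; the comedian
# personality gives exaggerated movements a higher weight.
PERSONALITY_WEIGHTS = {
    "comedian": {"exaggerated_pain_dance": 5.0},
    "neutral": {},
}

def categorize(token):
    """Return the predefined category a token belongs to, if any."""
    for category, words in CATEGORY_WORDS.items():
        if token.lower() in words:
            return category
    return None

def pick_movement(token, personality, rng=random):
    """Select a behavioral movement for a token, weighted by personality."""
    category = categorize(token)
    if category is None:
        return None
    candidates = CATEGORY_MOVEMENTS[category]
    weights = [PERSONALITY_WEIGHTS[personality].get(m, 1.0) for m in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

With the comedian personality selected, "ow!" is far more likely to yield the exaggerated pain dance than the plain hurt expression, because of its higher weight.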
In accordance with one preferred embodiment of the present invention, behavioral characteristics include personality and mood intensity settings, and behavioral commands include gesture commands. In this embodiment, the user selects a personality type for the visual representation to express a specific emotion or image. The personality or image can correspond to the user's actual personality or image, or can be any personality or image the user chooses to adopt for the conversation session. During a conversation, the visual representation is animated with behavioral movements linked to the selected personality. For example, an extrovert personality selection will generate behavioral movements which are dynamic and energetic, such as moving frequently, having eyes wide open, and making big hand gestures, whereas an introvert personality will have movements which are subdued, e.g., little or no body or facial movements. By animating these movements in connection with the text, the visual representation communicates the personality desired to be communicated by the sender, which is important emotional information otherwise absent from an electronic communication.
The mood intensity selection allows the user to adjust which behavioral movements associated with the personality type will be selected. The selection of a mood intensity assigns each movement a weight that determines the probability the movement will be selected. For example, if a cheerful mood is selected, then behavioral movements which are associated with more pleasant emotions, e.g., laughing, are given higher weight, and are therefore selected with higher frequency. This provides greater control over the behavioral movements of a visual representation to allow more precise communication of a sender's emotional state. Gestures are also provided to allow the user to emphasize text or emotions by having the visual representation animate a specific behavioral movement or sequence of movements to communicate an instantaneous emotion or behavior, for example, shaking a fist to communicate anger, or waving a hand to signal a welcome.
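The mood-weighting scheme amounts to weighted random selection: the mood scales each movement's weight, and the scaled weight sets the probability of being chosen. A minimal sketch, with illustrative (assumed) movement names and multiplier values:

```python
import random

# Assumed baseline weights for a personality's movement set.
BASE_MOVEMENTS = {"laugh": 1.0, "smile": 1.0, "frown": 1.0, "sigh": 1.0}

# Assumed mood multipliers: a cheerful mood boosts pleasant movements.
MOOD_MULTIPLIERS = {
    "cheerful": {"laugh": 4.0, "smile": 3.0, "frown": 0.2, "sigh": 0.2},
    "gloomy":   {"laugh": 0.2, "smile": 0.3, "frown": 3.0, "sigh": 4.0},
}

def weighted_movement(mood, rng=random):
    """Pick a movement with probability proportional to its mood-scaled weight."""
    moves = list(BASE_MOVEMENTS)
    weights = [BASE_MOVEMENTS[m] * MOOD_MULTIPLIERS[mood].get(m, 1.0)
               for m in moves]
    return rng.choices(moves, weights=weights, k=1)[0]

# Sampling shows the effect: with a cheerful mood, pleasant movements
# dominate the selections.
random.seed(0)
counts = {m: 0 for m in BASE_MOVEMENTS}
for _ in range(1000):
    counts[weighted_movement("cheerful")] += 1
```

With these example multipliers, laughing and smiling account for roughly 95% of selections under the cheerful mood, which is the "selected with higher frequency" behavior the embodiment describes.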
In one embodiment, the visual representation has a set of behavioral movements for different states, including listening (receiving communication from another user), and fidgeting (or idle). These movements are also selected responsive to the selected behavioral characteristics.
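The per-state movement sets can be sketched as a simple lookup keyed by state and behavioral characteristic. The state names follow the text; the movements and characteristic labels are hypothetical placeholders.

```python
# Assumed movement sets per state, still filtered by the selected
# behavioral characteristic (e.g., extroverted vs. introverted).
STATE_MOVEMENTS = {
    "listening": {
        "extroverted": ["lean_forward", "nod_often"],
        "introverted": ["still_gaze"],
    },
    "fidgeting": {
        "extroverted": ["pace", "drum_fingers"],
        "introverted": ["glance_down"],
    },
}

def movements_for(state, characteristic):
    """Return the movements to animate for a state and characteristic."""
    return STATE_MOVEMENTS.get(state, {}).get(characteristic, [])
```

For example, while an introverted representation is listening, it would hold a still gaze rather than lean in and nod, so the idle and listening behavior itself carries the selected characteristic.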
The behavioral movements themselves may include facial movements of the visual representation (for example, expressions), body movements of the visual representation, and the generation of audio clips responsive to the behavior or emotion to be expressed.