Conventionally, there has been a demand for an interface apparatus which recognizes the personality and emotion of a person and executes processing in accordance with the recognition result. In order to implement such human-like processing, it is indispensable to develop a personality recognition technique and an emotion recognition technique required to recognize the personality and emotion of the user.
According to conventional techniques associated with emotion recognition, emotion recognition is performed on the basis of voice information and image information (such as an expression, action, or the like) of the user (for example, Japanese Patent Laid-Open Nos. 5-12023, 10-228295, and 2001-83984). Of these, in Japanese Patent Laid-Open No. 10-228295, an emotion recognition result obtained from voice information and one obtained from image (expression) information are each multiplied by a predetermined weight and combined, thus obtaining a final emotion recognition result.
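The weighted combination described above can be illustrated with a minimal sketch. The emotion labels, the score dictionaries, and the weight values below are all illustrative assumptions; the cited reference specifies neither concrete weights nor a score representation.

```python
# Illustrative sketch of combining per-emotion scores from a voice-based
# recognizer and an image (expression)-based recognizer with fixed weights.
# Labels, scores, and weights are hypothetical examples.
EMOTIONS = ["delight", "anger", "sorrow", "pleasure"]

def combine_emotion_scores(voice_scores, image_scores, w_voice=0.6, w_image=0.4):
    """Multiply each recognizer's per-emotion score by a predetermined
    weight, sum them, and return the emotion with the highest total."""
    combined = {
        e: w_voice * voice_scores.get(e, 0.0) + w_image * image_scores.get(e, 0.0)
        for e in EMOTIONS
    }
    final_emotion = max(combined, key=combined.get)
    return final_emotion, combined

# Example: voice strongly suggests anger, expression is ambiguous.
voice = {"anger": 0.7, "sorrow": 0.2, "delight": 0.05, "pleasure": 0.05}
image = {"anger": 0.4, "sorrow": 0.5, "delight": 0.05, "pleasure": 0.05}
label, scores = combine_emotion_scores(voice, image)
# With these weights, anger (0.58) outscores sorrow (0.32).
```

In practice, the weights would be tuned to reflect the relative reliability of each modality.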
In Japanese Patent Laid-Open No. 2001-83984, the personality is recognized in consideration of the physical and action features of the user in addition to the voice and expression information, thereby further improving the emotion recognition precision.
Incidentally, human emotions such as delight, anger, sorrow, and pleasure necessarily have underlying causes that drive them. For example, when a person is "angry", the anger may arise from any of various causes.
The conventional interface apparatus, however, applies the same control to the same emotion regardless of the psychological state and physical condition of the user, and thus cannot take the cause of the emotion into account.
The present invention has been made in consideration of the above problems, and has as its object to provide a technique which estimates a cause that has driven the user to the current emotion, and communicates with the user in accordance with the estimated cause.