Conventional story machines can only play voices according to the contents of a text and lack a mechanism for interacting with users, so it is hard for them to draw the attention and arouse the interest of the users. Technologies for performing emotion analysis on the contents of a text are currently available, but these technologies can only perform the emotion analysis on a single sentence of the text and cannot take the overall emotion of a whole paragraph or the whole text into consideration. Therefore, when the emotion analysis results of the sentences in the text are inconsistent, the actual emotion of the text cannot be determined correctly or expressed fully. Consequently, in the prior art, performing emotion analysis on the text provides a poor effect, and control instructions for controlling a to-be-controlled device cannot be generated automatically according to the analysis results. Moreover, the prior art only performs emotion analysis and does not take the actions of a role or the environment in the text into consideration; therefore, control instructions corresponding to the actions of a role or the environment cannot be generated.
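The limitation described above can be illustrated with a minimal sketch, assuming a hypothetical lexicon-based scorer (the word lists, function names, and example sentences are illustrative assumptions, not part of any actual system): each sentence is labeled independently, the per-sentence labels conflict, and no step exists to reconcile them into a single emotion for the paragraph.

```python
# Illustrative sketch of sentence-level-only emotion analysis (prior art).
# The lexicon and sentences are hypothetical examples, not a real system.

POSITIVE = {"happy", "bright", "smiled"}
NEGATIVE = {"lost", "dark", "cried"}

def sentence_emotion(sentence):
    """Label one sentence by counting lexicon hits; ignores all context."""
    words = sentence.lower().replace(".", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

paragraph = [
    "The morning was bright and the child smiled.",
    "Then the child lost the toy and cried.",
    "It was a dark evening.",
]

# Each sentence is analyzed in isolation; the resulting labels conflict,
# and the prior art has no step that resolves them into one overall emotion.
labels = [sentence_emotion(s) for s in paragraph]
print(labels)  # ['positive', 'negative', 'negative']
```

Because the labels disagree, a per-sentence approach cannot say whether the paragraph as a whole is positive or negative, which is precisely the inconsistency problem noted above.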
In various application fields (e.g., Human-Machine Interfaces), it becomes more and more important to correctly recognize the emotion presented by a text so as to generate control instructions for to-be-controlled devices based on the recognized information and to provide appropriate responses and/or services for users. The conventional emotion analysis technology cannot overcome the aforementioned problems, so the results of the emotion analysis are not accurate enough. Accordingly, there is an urgent need for a technology which is capable of improving the accuracy of emotion analysis on a text, reducing the inconsistency among the emotion analysis results of the sentences in the text, and automatically generating control instructions for the to-be-controlled devices according to the correct analysis results of the text.