In applications of Virtual Reality (VR) technology, imparting simulated facial expression changes to a virtual model (e.g., an avatar) is an important issue for improving users' interactive experiences in the virtual reality environment. According to some conventional technologies, the real-time facial expression of a user is identified based on images, which is then used to simulate the facial expression of the virtual model. These conventional technologies result in low identification accuracy and poor simulation results because the head-mounted display (HMD) worn by the user (which is required in Virtual Reality applications) covers the upper half of the user's face.
To overcome the problem that the upper half of the face is covered, some conventional technologies have a plurality of sensors (e.g., three-dimensional sensors, infrared sensors, electromyogram (EMG) sensors, electrooculogram (EOG) sensors, or the like) disposed in the head-mounted display. The sensors detect information such as muscle changes of the covered upper half of the face, and the facial expression of the upper half of the face is then simulated according to those muscle-status changes. However, disposing a large number of sensors in a head-mounted display increases hardware cost. In addition, the upper-half facial expression simulated according to the sensor data may conflict with, or cannot be integrated with, the lower-half facial expression simulated according to the real-time face image.
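As a concrete illustration of the integration problem described above, consider a minimal sketch in which each pipeline outputs partial expression-parameter (blendshape) weights. The blendshape names, weights, and the `merge_expression_weights` helper here are hypothetical and not part of any conventional system referenced in this description; the sketch only shows how the sensor-based upper-half estimate and the image-based lower-half estimate can report conflicting values for a shared parameter.

```python
def merge_expression_weights(upper, lower, blend=0.5):
    """Merge two partial blendshape-weight dictionaries.

    Hypothetical sketch: for parameters estimated by both pipelines,
    linearly interpolate (`blend` is the trust placed in the
    sensor-derived upper-half estimate); otherwise keep whichever
    single estimate exists.
    """
    merged = {}
    for key in set(upper) | set(lower):
        if key in upper and key in lower:
            # Conflicting estimates for the same blendshape must be
            # reconciled somehow; naive averaging can look unnatural.
            merged[key] = blend * upper[key] + (1 - blend) * lower[key]
        elif key in upper:
            merged[key] = upper[key]
        else:
            merged[key] = lower[key]
    return merged


# Hypothetical example: the HMD sensors report a strong brow raise,
# while the camera image of the uncovered lower half suggests a nearly
# neutral brow, so the two estimates disagree.
upper_half = {"brow_raise": 0.9, "eye_blink": 0.2}
lower_half = {"jaw_open": 0.6, "brow_raise": 0.1}
print(merge_expression_weights(upper_half, lower_half))
```

The averaging rule is only one possible reconciliation strategy; as the passage above notes, such independently produced estimates may simply be impossible to integrate into one coherent facial expression.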
Accordingly, vividly imparting simulated facial expressions to a virtual model in virtual reality remains a challenging task.