In recent years, as wearable devices have become popular, head-mounted display devices have also developed rapidly. Currently, many head-mounted display devices can be externally connected to terminal devices, e.g., mobile phones, tablet computers, etc. When display signals of a terminal device are transmitted to a head-mounted display device, the user can view the content displayed on the terminal device through the head-mounted display device. Therefore, a better viewing experience can be achieved.
However, when the user starts to use the head-mounted display device to watch the content displayed on the terminal device, the user cannot see the operating screen of the terminal device, namely, the touch screen. Accordingly, the user cannot directly operate the touch screen of the terminal device.
In existing technologies, in order to solve the above-described problem, a related sensor is usually added to the head-mounted display device. The user's gesture is first collected, and the collected gesture is then recognized by the sensor, thereby achieving the purpose of controlling the terminal device. However, current gesture-recognition technology is not mature, and its recognition accuracy is not high. To achieve higher recognition accuracy, the implementation system becomes complicated and incurs a higher cost.
Therefore, the problems to be solved are how to achieve direct control of the touch screen of the terminal device, and how to improve the precision of such touch-screen control.