Interaction through a conventional display and input device, such as a touchscreen or a keyboard, remains two-dimensional and is confined to the planar area of the physical device. On one hand, a mobile terminal, as a physical display and input device, is expected to be slim, compact, and portable; on the other hand, a larger effective area for display and touch control is desirable. Both requirements may be met through virtual display and touch input. With a conventional two-dimensional display and input device, a mobile terminal cannot match a desktop in the amount of information displayed, the accuracy of a contact point, or the ease of operation. To achieve display and input capability comparable to that of a desktop, a mobile terminal may overcome the limitations of its physical device through virtual display and touch input.
Existing 3D gesture identification techniques are limited to identifying wide-span, simple, easily identifiable gestures, so the 3D gestures applicable to a mobile terminal are very limited. For example, 3D virtual keyboard typing, 3D virtual handwriting input, and 3D virtual musical instrument playing are all complicated and involve touch control movements that require high-precision identification. No existing gesture identification technique can effectively identify a fine, complicated, rapid, and continuous hand movement.
In recent years, with the rapid development of Near Field Communication (NFC) and naked-eye 3D display technology, there has been a pressing need for a solution enabling large-scale application of high-precision 3D human-machine interaction technology on a terminal.