1. Field Disclosure
The disclosure generally relates to the design of user interfaces, and more particularly, to gesture input systems and methods for providing a configurable operation area in which remote control may be achieved by users' gestures.
2. Description of the Related Art
In the User Interface (UI) field, most designs are developed based upon the widespread use of computers or consumer electronic products, such as smart phones, panel Personal Computers (PCs), notebook PCs, and multimedia players. The Graphical User Interface (GUI) is one of the most common UI designs, which is usually provided on a display device and is operated by moving a mouse cursor and/or clicking a mouse button to implement control options, such as “select” and “execute”.
However, with rapid developments in computer technology, users no longer want to be confined to the control limits inherent in using a mouse and keyboard, and wish to have more flexible choices when operating devices, such as computers or consumer electronic products. To this end, so-called perceptual UIs have been developed, including touch controls and gesture controls. Although the operational characteristics of perceptual UIs differ from those of conventional UIs, users of perceptual UIs still wish to retain the familiar characteristics of the mouse and keyboard, while also having the flexibility provided by the perceptual UIs.
Among the perceptual UIs, the two-dimensional (2D) gesture recognition technique is well known for its operational convenience and low cost. However, without depth information (also called Z-axis information), the 2D gesture recognition technique is restricted to providing only cursor-like operations and cannot provide more complicated operations, such as clicking and dragging. These restrictions may be alleviated by three-dimensional (3D) gesture recognition techniques (such as that used by the Microsoft® Kinect), but such techniques have several drawbacks, such as high cost and complicated mechanical structures, which limit their practical application.
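The cursor-only limitation described above can be illustrated with a minimal sketch (the function name, frame dimensions, and mapping are illustrative assumptions, not part of the disclosure): a 2D tracker yields only an (x, y) hand centroid, which can be scaled to screen coordinates to drive a cursor, but there is no depth signal from which a "push toward the camera" click could be distinguished from ordinary movement.

```python
# Hypothetical sketch: a 2D hand centroid can only drive a cursor.
# All names and frame/screen sizes here are illustrative assumptions.

def map_to_screen(hand_x, hand_y, frame_w, frame_h, screen_w, screen_h):
    """Linearly scale a hand centroid from camera-frame to screen space.

    With only 2D input there is no Z axis, so no additional channel
    exists to encode operations such as clicking or dragging.
    """
    cursor_x = hand_x / frame_w * screen_w
    cursor_y = hand_y / frame_h * screen_h
    return cursor_x, cursor_y

# A hand centroid at the center of a 640x480 camera frame maps to the
# center of a 1920x1080 screen.
print(map_to_screen(320, 240, 640, 480, 1920, 1080))  # (960.0, 540.0)
```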
Thus, it is desirable to have a gesture input method that uses 2D images of users to provide a configurable operation area in which remote control may be achieved by users' gestures.