Computers are used all over the world to perform a wide variety of tasks (e.g., word processing, scheduling, database management, etc.) that, prior to the advent of computer systems, were performed manually. Computing systems now take a wide variety of forms, including desktop computers, laptop computers, tablet PCs, Personal Digital Assistants (PDAs), and the like. Even household devices (such as refrigerators, ovens, sewing machines, security systems, and the like) have varying levels of processing capability and thus may be considered computing systems. To enable some of the advantageous features provided by such devices, computer systems often come equipped with various interfaces that allow users to input data or commands that, when executed by a processor, achieve a desired result (i.e., produce output reflecting the user's manipulation).
Commonly known input interfaces include key entry pads (e.g., keyboard, telephone dial, push buttons, etc.), touch pads, or some form of mouse, while output interfaces typically include some type of display along with other functional output. More recently, input and output interfaces have been combined to reduce the number of peripheral devices needed for a computer system and to provide a more intuitive user experience. For example, touch-screens or touch panels are display overlays that can both display and receive information on the same screen. Such overlays allow a display to be used as an input device, removing the keyboard and/or the mouse as the primary input device for interacting with the display's content.
Touch-screen technology includes a number of different types (e.g., resistive, surface wave, capacitive, infrared, etc.) and can accept numerous forms of input. In the past, touch-screens were limited to offering simplistic, button-like touch selection input interfaces. More recently, however, gesture interfaces have been developed, which accept input in the form of hand or stylus movement. Such movement may include any combination of single or multiple finger or stylus tapping, pressure, waving, lifting, or other motion on or near the screen's surface. When performed in a certain order and/or pattern, such movement is interpreted by the touch surface computer as a particular type of input. Each gesture, however, may be interpreted differently across different platform systems and even across different computing applications. As such, a user may be confused or uncertain about how to interact appropriately with a particular gesture recognition program.
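As a minimal sketch of the classification step described above, a recognizer might compare the start and end positions of two touch points to decide which gesture was performed. All names here (classify_gesture, the gesture labels) are hypothetical and for illustration only; real platforms expose far richer gesture-recognition APIs.

```python
def classify_gesture(start_points, end_points):
    """Classify a two-finger movement from its start and end touch
    coordinates. Fingers moving toward each other yield a 'pinch',
    fingers moving apart yield a 'spread', and no change in the gap
    yields a 'hold'. A hypothetical sketch, not a platform API."""
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    start_gap = distance(*start_points)
    end_gap = distance(*end_points)
    if end_gap < start_gap:
        return "pinch"
    if end_gap > start_gap:
        return "spread"
    return "hold"
```

For example, two fingers starting ten units apart and ending two units apart would be classified as a pinch.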
For example, a user might touch a window displayed on the touch panel at opposite sides of the window with two fingers and drag each finger toward the other. Program X could interpret this input as a “close window” command, while program Y might interpret the same gesture as a “draw line” command. Moreover, as gestures increase in complexity (e.g., by incorporating various combinations of ordered touches, drags, pressure-sensitive touches, etc.), users become more apprehensive and confused about how to enter gestures and about how those gestures will be interpreted by a given software program.
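The ambiguity described above can be sketched as a per-application lookup table: the same recognized gesture is routed to different commands depending on which program receives it. The program names, gesture labels, and command strings below are invented for illustration and do not correspond to any real application.

```python
# Hypothetical per-application gesture mappings: the identical
# "pinch" gesture resolves to different commands in each program.
PROGRAM_GESTURE_MAPS = {
    "program_x": {"pinch": "close window"},
    "program_y": {"pinch": "draw line"},
}

def interpret(program, gesture):
    """Return the command a given program assigns to a gesture,
    or 'ignored' if the program has no mapping for it."""
    return PROGRAM_GESTURE_MAPS.get(program, {}).get(gesture, "ignored")
```

Because nothing ties the two tables together, the user has no way to predict, from the gesture alone, which command will result.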