A user of a personal computer often wishes to perform an action, such as a Web search, from the desktop shell or from within an application such as a browser, email reader or word processor. Performing such an action generally requires multiple mouse clicks, precise targeting of the mouse over a specific user-interface (UI) widget, or entry of a key-chord sequence on a keyboard.
For example, initiating a Web search from within a browser generally requires activating an edit control in a toolbar, clicking the mouse to navigate to a search engine or to activate a context menu, or pressing a memorized key sequence such as Alt-S. Initiating such a search from outside a browser further requires first activating the preferred search application, which involves additional mouse movements or keystrokes, such as clicking on the browser icon on the desktop.
Custom input devices and mouse gestures have been devised as an alternative to complicated click or keystroke sequences. An application action may be initiated when a detector application recognizes that the mouse has been moved in a predetermined manner. For example, drawing an “S” shape with the mouse could be configured to open the browser to a search engine site. This approach has a number of potential drawbacks: (1) It requires the user to manually activate the gesture recognizer, because a recognizer that runs continuously can misinterpret ordinary mouse movement as a preconfigured gesture. (2) It requires the user to memorize the strokes of the various gesture commands as configured on a specific computer. (3) It requires the user to have sufficient dexterity and motor skills to articulate the gestures. Mouse gestures are therefore difficult for novice or elderly users. Furthermore, typical mouse gestures often require a preactivation step performed with the mouse (e.g., holding down the right button before drawing the letter “S”).
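To make the gesture-detection mechanism described above concrete, the following is a minimal illustrative sketch, not taken from the source: it encodes a recorded mouse path as a sequence of four-way direction codes and matches it against a table of preconfigured gestures. The direction alphabet, jitter threshold, template string for the “S” stroke, and the action name `open_search` are all hypothetical choices for demonstration.

```python
def encode(points, min_dist=5):
    """Collapse a mouse path (list of (x, y) points) into a string of
    4-way direction codes: L, R, U, D."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_dist and abs(dy) < min_dist:
            continue  # ignore small movements (jitter) below the threshold
        # Pick the dominant axis of movement for this segment.
        d = ("R" if dx > 0 else "L") if abs(dx) >= abs(dy) else ("D" if dy > 0 else "U")
        if not dirs or dirs[-1] != d:
            dirs.append(d)  # collapse consecutive segments in the same direction
    return "".join(dirs)

# Hypothetical gesture table: an "S" drawn top-to-bottom
# (left, down, right, down, left) triggers a search action.
GESTURES = {"LDRDL": "open_search"}

def recognize(points):
    """Return the configured action for the drawn path, or None."""
    return GESTURES.get(encode(points))
```

This sketch also illustrates the drawbacks noted above: the template string must match exactly, so the user must reproduce the configured strokes precisely, and any path that happens to encode to a template (e.g., during ordinary mouse use) would fire the action unless the recognizer is explicitly activated first.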