The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.
Traditionally, users have interacted with electronic devices, such as computers or televisions, or with computing applications, such as computer games, multimedia applications, or office applications, via indirect input devices including, for example, keyboards, joysticks, or remote controllers. More recently, however, electronics manufacturers have developed systems that detect a user's movements or gestures and cause the display to respond in a contextually relevant manner. The user's gestures may be detected using an optical imaging system, and characterized and interpreted by suitable computational resources. For example, a user near a television may perform a sliding hand gesture, which is detected by the gesture-recognition system; in response to the detected gesture, the television may activate and display a control panel on the screen, allowing the user to make selections using subsequent gestures. The user may then move her hand in an "up" or "down" direction, which, again, is detected and interpreted to facilitate channel selection.
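By way of illustration, the mapping from recognized gestures to device commands described above might be sketched as follows. This is a minimal, hypothetical example: the gesture labels, command names, and the `handle_gesture` function are illustrative assumptions, not taken from any particular system; in practice the labels would be produced by an optical gesture-recognition pipeline.

```python
# Hypothetical mapping from recognized gesture labels to television commands.
# In a real system, the labels would be emitted by a gesture-recognition
# pipeline that processes images from an optical imaging system.
GESTURE_COMMANDS = {
    "slide": "show_control_panel",  # sliding hand gesture activates the panel
    "up": "channel_up",             # upward hand movement selects next channel
    "down": "channel_down",         # downward hand movement selects previous channel
}

def handle_gesture(label):
    """Return the command for a recognized gesture, or None if unrecognized."""
    return GESTURE_COMMANDS.get(label)
```

A dispatcher of this kind also highlights the feedback problem noted below: when `handle_gesture` returns `None`, or when a gesture is misclassified, the user receives no indication of why the system did not respond as intended.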
Existing systems, however, provide limited feedback to users in response to their gestures, and users may find these systems frustrating and difficult to use. A system that requires or allows many gestures, or complicated gestures, may have difficulty distinguishing one gesture from another, and a user may accidentally issue one gesture-based command while intending another. Further, gesture-based presentation environments may be complicated, involving many objects, items, and elements, and the user may be uncertain what effect his gestures have, or should have, on the system.
A need therefore exists for improved technology for providing visual feedback to the user based on his or her gestures.