In the beginning, there was the mouse and the keyboard. And the mouse and the keyboard interfaced the human and the computer. However, as technology developed, computers acquired the capacity to receive and process data sufficient to support a cornucopia of relatively complex, real-time user applications that use or require more seamless and natural interfacing with human users than the mouse and keyboard provide.
Among familiar applications that use or require enhanced human-machine interfacing are, by way of example, voice recognition services that enable a user to input data to a computer by talking to it, cell-phone computer games, and virtual realities that operate in real time. Many of these applications require, or can benefit from, a capability to interpret and/or use human gestures responsive to images of a person making the gestures. As a result, considerable research effort and resources are invested in developing gesture recognition apparatus and algorithms capable of receiving visual data in real time and processing the data to recognize and interpret human gestures.
A human gesture is considered to be any conscious or unconscious element of body language that conveys information to another person and/or a computer. By way of example, a gesture may be any conscious or unconscious facial expression made in response to another person and/or a computer, or a hand or body pose or motion made in response to a challenge or stimulus provided by a video game, virtual reality environment, or multimedia presentation. Gesture recognition technology (GRT) is considered to comprise hardware and/or software technology configured to recognize, interpret, and/or use human gestures responsive to images of a person making the gestures.
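To make the notion of recognizing a gesture from image-derived data concrete, the following is a minimal, purely illustrative sketch. It assumes a hand-tracking front end has already extracted 2D keypoints (fingertip and knuckle positions in image coordinates) from a camera frame; the function name, keypoint layout, and "open palm vs. fist" rule are hypothetical simplifications, not part of any particular GRT system described above.

```python
# Hypothetical sketch: classifying a simple hand pose from 2D keypoints.
# Assumes image coordinates where a smaller y value means higher in the frame.
# The keypoint layout and decision rule are illustrative assumptions only.

def classify_hand_gesture(fingertip_ys, knuckle_ys):
    """Label a hand pose as 'open palm' or 'fist'.

    fingertip_ys: y-coordinates of the five fingertips.
    knuckle_ys:   y-coordinates of the corresponding knuckles.
    A finger is treated as extended when its tip lies above its knuckle.
    """
    extended = sum(
        1 for tip, knuckle in zip(fingertip_ys, knuckle_ys)
        if tip < knuckle  # fingertip above its knuckle => finger extended
    )
    return "open palm" if extended >= 4 else "fist"

# Open hand: all five fingertips sit above their knuckles in the image.
print(classify_hand_gesture([10, 8, 7, 8, 11], [30, 28, 27, 28, 31]))
# Curled hand: fingertips fall below the knuckles.
print(classify_hand_gesture([40, 42, 43, 42, 41], [30, 28, 27, 28, 31]))
```

A real GRT pipeline would replace this threshold rule with a trained model and operate on a stream of frames, but the sketch shows the essential step: mapping image-derived pose measurements to a discrete gesture label.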