User interfaces have traditionally relied on input devices such as keyboards, which require physical manipulation by a user. Increasingly, however, it is desired to detect and monitor the physical positions and movements of users within a scene or environment. User motions and gestures can be used in some environments as user commands and inputs to automated systems. In particular, hand gestures may be useful in providing input from a user to a computerized system.
As gesture-based computerized systems become more pervasive in home and public use, one challenge is an increased likelihood of false positives, in which users unintentionally initiate actions. This may result in presentation of content that is not intended for the audience. For example, suppose a child walking through a room unknowingly begins interacting with a parent's gesture-based computerized system, and the computerized system presents the parent's work content. Through continued gesture interaction, which may be playful movement, the child may unintentionally disrupt the parent's work product or gain access to content that is not intended for consumption by the child.
Accordingly, there is a need for improved ways to differentiate between adults and children in environments where gesture-based computerized systems are used.