To a large extent, humans' interactions with electronic devices, such as computers, tablets, and mobile phones, require physically manipulating controls, pressing buttons, or touching screens. For example, users interact with computers via input devices, such as a keyboard and mouse. While a keyboard and mouse are effective for functions such as entering text and scrolling through documents, they are not effective for many other ways in which a user could interact with an electronic device. A user's hand holding a mouse is constrained to move only along flat two-dimensional (2D) surfaces, and navigating with a mouse through three-dimensional (3D) virtual spaces is clumsy and non-intuitive. Similarly, the flat interface of a touch screen does not allow a user to convey any notion of depth. These devices restrict the full range of possible hand and finger movements to a limited subset of two-dimensional movements that conform to the constraints of the technology.
A more natural and intuitive way to interact with electronic devices is by moving the hands and fingers freely in the space between the user and the device. To enable this interaction, it is necessary to robustly and accurately track, in real time, the configuration of the user's hands and the articulation of the fingers.