User interaction with virtual objects is ubiquitous in computing, and in particular in online environments. Such interactions include selecting objects as indicated on menus or in images, searching to discover additional details about the objects, retrieving additional images corresponding to the objects, and the like. These interactions are generally facilitated by keyboard and mouse commands, by buttons rendered on touchscreen displays, and in some cases by voice commands.
Certain efforts have been made at accomplishing interactions with computing devices at a more “real world” level. For example, certain current eyewear devices allow a user to record video of a viewed scene using voice commands. It is also known to use body motions to control a UI, e.g., for a game, as supported by the Sony Move® system.
However, such devices still fail to provide a full-featured system. In particular, such devices are limited in their ability to provide users with information. These limitations are especially felt when a user is away from a laptop or other computing device with a substantial form factor and instead relies on a mobile device. Despite significant improvements in the computing power of mobile devices, such devices still have limited input and output capabilities.
This Background is provided to introduce a brief context for the Summary and Detailed Description that follow. This Background is not intended to be an aid in determining the scope of the claimed subject matter nor be viewed as limiting the claimed subject matter to implementations that solve any or all of the disadvantages or problems presented above.