The continued growth of the Internet and Internet-connected wireless devices (e.g., smartphones) has greatly expanded the capabilities and ease of providing and consuming information. As users demand more information throughout their day-to-day lives, there is an increasing need to provide that information, and to do so in a relatively seamless manner. Accordingly, it is desirable to be able to interact with an interactive object (e.g., an audio/visual display) so as to instigate some action by the interactive object, or to obtain information about the interactive object, automatically by the simple gesture of looking at it. In this way, a user obtains the information he or she seeks easily and on demand.
Location protocols alone, such as GPS or Wi-Fi positioning, provide the location of a user with a high degree of accuracy; however, they do not provide user orientation, that is, the direction the user is facing or what the user is looking at. Using these location-based systems, activation of an interactive object can be triggered only by the proximity of the user to the interactive object. In situations where the user is not looking at the interactive object, activation may not be required or desired; should the user have his or her back turned to the interactive object, it is not of interest to the user at that moment. Proximity can also be misleading: a user can be very close to a point of interest (POI) but completely disconnected from it, for example, separated by a wall.
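The limitation described above can be illustrated with a short sketch. The function below is a hypothetical example, not part of the disclosure: it assumes planar coordinates, a compass heading in degrees, and arbitrary distance and field-of-view thresholds, and shows why an orientation check must accompany a proximity check before activating a point of interest.

```python
import math

def should_activate(user_xy, heading_deg, poi_xy,
                    max_dist=10.0, fov_deg=60.0):
    """Illustrative sketch: activate a POI only when it is both near
    the user AND within the user's field of view. Thresholds are
    arbitrary assumptions for the example."""
    dx = poi_xy[0] - user_xy[0]
    dy = poi_xy[1] - user_xy[1]
    # Proximity check alone (what location-only systems can do).
    if math.hypot(dx, dy) > max_dist:
        return False
    # Bearing from user to POI, clockwise from north (+y axis).
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest signed angular difference between heading and bearing.
    diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    # Orientation check: is the POI within the user's field of view?
    return diff <= fov_deg / 2.0
```

With proximity alone, a user standing near a POI but facing away would still trigger activation; the orientation term filters that case out, as when the POI sits directly behind the user.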
Augmented reality applications are limited in their ability to activate a point of interest, and only relay information to a device based on the device's location and orientation. A typical augmented reality application overlays a visual representation of the information on top of a video feed from the device's video camera, and the user can interact with the augmented reality rendering. These systems require the user to point the camera of the device directly at a point of interest, require a camera on the device, and require the device to be capable of presenting the video feed together with the related information.
Accordingly, it is desirable to activate, provide information, and/or interact with an interactive object automatically by visually defined gestures such as looking at it or pointing it out. It is with respect to these and other considerations that the disclosure made herein is presented.