In recent years, hardware such as headsets, adapters, and viewers used in virtual reality (VR), augmented reality (AR), or mixed reality (MR) environments has become widely available. In VR, a user sees a visual environment composed entirely of computer-generated graphical objects; in AR or MR, the environment combines real-world objects with computer-generated graphics. In either case, the user interacts with the environment by moving their head and hands; these movements are captured by the hardware and translated into the computerized environment by specialized software.
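As a simplified illustration of how a tracked head movement might be translated into the computerized environment, the sketch below converts a head pose, expressed here as yaw and pitch angles, into a view-direction vector for the virtual scene. The function name and the yaw/pitch representation are illustrative assumptions; real VR runtimes typically expose full six-degree-of-freedom quaternion poses.

```python
import math

def head_pose_to_view_vector(yaw_deg, pitch_deg):
    """Map a tracked head pose (yaw/pitch, in degrees) to a unit
    view-direction vector in the virtual scene.

    Illustrative sketch only: actual VR software works with full
    quaternion poses and positional tracking, not just two angles.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    # Spherical-to-Cartesian conversion: yaw rotates about the
    # vertical axis, pitch tilts the view up or down.
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Looking straight ahead (no rotation) yields the forward axis.
print(head_pose_to_view_vector(0, 0))  # → (0.0, 0.0, 1.0)
```

Each frame, the renderer would use a vector like this (or the full pose matrix it is derived from) to orient the virtual camera so the displayed scene tracks the user's head.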
However, because VR/AR environments rely on a user's gestures and hand movements for input, it is difficult to implement controls and interface elements that require precise or subtle actions, such as those found in a traditional keyboard-and-mouse setup (e.g., content scrolling, point-and-click menus, etc.). Such UI controls are hard to implement properly in these environments, often leading to a suboptimal user experience.