A device can be used to generate and display data in addition to an image captured with the device. For example, augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data. With the help of advanced AR technology (e.g., adding computer vision and object recognition), the information about the user's surrounding real world becomes interactive. Device-generated (e.g., artificial) information about the environment and its objects can be overlaid on the real world.
The shape and size of the AR content may be adjusted based on the depth of the objects in the real-world environment. Common depth sensors, such as the Kinect from Microsoft, compute depth by triangulating infrared signals from two cameras. An infrared laser projector projects a wide-angle laser grid from which the two cameras calculate depth. The resolution or accuracy of depth cannot be adjusted for a particular area, because the distance between the two cameras is constant and the wide angle of the infrared laser projector is meant to cover as much space as possible. Other methods of measuring depth include structured light. However, the response time using structured-light technology can be slow.
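The triangulation described above reduces, for a rectified stereo pair, to the standard relation Z = f · B / d, where f is the focal length in pixels, B is the fixed baseline between the two cameras, and d is the pixel disparity of a point between the two views. A minimal sketch follows; the function name and the numeric values are illustrative, not taken from any particular sensor.

```python
def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Depth via stereo triangulation: Z = f * B / d.

    focal_length_px: focal length in pixels (assumed known from calibration)
    baseline_m:      fixed distance between the two cameras, in meters
    disparity_px:    pixel offset of a matched point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative values: f = 580 px, baseline = 7.5 cm, disparity = 29 px
print(depth_from_disparity(580.0, 0.075, 29.0))  # 1.5 (meters)
```

Because the baseline B is fixed by the hardware, the depth resolution at any given distance is fixed as well, which is the limitation the passage describes: accuracy cannot be concentrated on a particular region of interest.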