In augmented reality (AR), virtual objects are integrated into images of a real-world scene. Generating realistic 3D scenes that combine 3D virtual objects with real-world objects, and that are suitable for stereoscopic viewing, involves several consistency requirements:
Stereoscopic depth-illusion consistency: A stereo pair of images of an object must be captured and displayed in a way that accounts for the object's distance from the viewer and the viewer's inter-pupillary distance, so that the two images fuse in the brain and create a correct depth illusion. If the stereoscopic system captures the images along parallel left and right lines of sight, those lines of sight must be separated by a distance corresponding to the viewer's inter-pupillary distance, and the image pair must be presented to the viewer with the same separation. If instead the left and right lines of sight converge at a finite point, they must intersect at an angle corresponding to the angular offset (vergence) of the viewer's two eyes. In this converged configuration, the object cannot be positioned beyond the point of intersection of the two lines of sight, and the eyes cannot fuse objects lying in front of or beyond that point.
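For the parallel-line-of-sight case, the on-screen horizontal parallax that produces the depth illusion can be sketched with the standard pinhole relation. This is a minimal illustration, not taken from the source; the function name, millimeter units, and sign convention (positive parallax places the object behind the display plane) are assumptions.

```python
def screen_parallax(ipd_mm: float, screen_dist_mm: float, object_dist_mm: float) -> float:
    """Horizontal parallax on a parallel-axis stereo display.

    ipd_mm: viewer's inter-pupillary distance.
    screen_dist_mm: viewer-to-display distance.
    object_dist_mm: intended perceived distance of the object.

    Returns the left/right image offset: zero at the display plane,
    positive (up to the IPD) behind it, negative in front of it.
    """
    return ipd_mm * (object_dist_mm - screen_dist_mm) / object_dist_mm

# an object at the display plane fuses with zero parallax
assert screen_parallax(65.0, 600.0, 600.0) == 0.0
```

As the intended object distance grows without bound, the parallax approaches the IPD itself, which is why the two image points must never be separated by more than the viewer's eye separation.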
Geometrical consistency: If the virtual object is moved freely within a real-world image, geometrically correct integration should be maintained throughout the motion. For example, the sizes of the virtual objects must be rendered correctly, and occlusion effects must be presented correctly.
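The two examples named above can be sketched directly: under a pinhole projection the image size of an object shrinks inversely with its depth, and occlusion can be resolved per pixel by letting the nearer surface win (the z-buffer rule). This is an illustrative sketch, not the source's method; the function names and parameters are assumptions.

```python
def projected_size(focal_px: float, real_size: float, depth: float) -> float:
    """Image-plane size of an object under pinhole projection.

    focal_px: camera focal length in pixels; real_size and depth
    share the same world unit. Size falls off as 1/depth.
    """
    return focal_px * real_size / depth

def composite_pixel(virtual_rgb, virtual_depth, real_rgb, real_depth):
    """Per-pixel occlusion test: keep the color of the nearer surface."""
    return virtual_rgb if virtual_depth < real_depth else real_rgb
```

For instance, doubling the distance of a virtual object should halve its rendered size, and a real object whose depth at a pixel is smaller than the virtual object's depth should hide it there.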
Motion consistency: If a virtual object is integrated into an image sequence, the motion of the virtual object should be consistent with that of the real-world objects in the scene images.
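One common way to obtain this consistency is to anchor the virtual object in world coordinates and re-express it in the camera frame of every video frame using that frame's estimated camera pose, so the virtual object moves exactly as the tracked real points do. The sketch below assumes a rotation-plus-translation pose per frame; the function name and pure-Python matrix form are illustrative.

```python
def to_camera(point_world, R, t):
    """Express a world-anchored 3D point in one frame's camera coordinates.

    R: 3x3 rotation as a list of rows; t: length-3 translation.
    (R, t) is the camera pose estimated for that frame, e.g. by
    feature tracking. Applying the same per-frame pose to the virtual
    object and to the real scene keeps their apparent motion consistent.
    """
    return tuple(
        sum(R[i][j] * point_world[j] for j in range(3)) + t[i]
        for i in range(3)
    )

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# with an identity pose, camera and world coordinates coincide
assert to_camera((1.0, 2.0, 5.0), IDENTITY, (0.0, 0.0, 0.0)) == (1.0, 2.0, 5.0)
```

If the camera advances one unit along its viewing axis between frames, the same world point comes out one unit closer in the new camera frame, matching what the real objects do in the image.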
Photometric consistency: The shadows of the virtual objects have to be generated correctly, consistent with the illumination of the real scene.
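A simple illustration of shadow generation is planar projection: each vertex of the virtual object is projected onto a ground plane along the direction of a directional light, and the projected polygon is drawn as the shadow. This is a generic sketch under the stated assumptions (ground plane at y = 0, directional light not parallel to the ground), not the source's method.

```python
def ground_shadow(point, light_dir):
    """Project a vertex onto the ground plane y = 0 along a directional light.

    point, light_dir: (x, y, z) tuples; light_dir must have a nonzero
    y component (the light is not parallel to the ground plane).
    """
    px, py, pz = point
    lx, ly, lz = light_dir
    s = py / ly                      # ray parameter where the light ray meets y = 0
    return (px - s * lx, 0.0, pz - s * lz)

# a light shining straight down drops the shadow directly beneath the vertex
assert ground_shadow((3.0, 2.0, 1.0), (0.0, -1.0, 0.0)) == (3.0, 0.0, 1.0)
```

An oblique light offsets the shadow sideways: the lower the light, the longer the shadow, which is the cue a viewer uses to judge whether a virtual object really rests on the real ground.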