Image content displayed on a two-dimensional medium (such as a screen) can be given a third dimension by stereoscopic methods, which reproduce human binocular vision using separate left and right images. When humans view their surroundings, the spacing between the eyes gives each eye a slightly different view of a given scene. The disparity between what the left eye sees and what the right eye sees is a cue the brain uses to judge the relative distance of objects. The brain merges the two images through stereoscopic fusion to produce the three-dimensional perspective we perceive.
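The relationship between disparity and distance can be illustrated with the standard stereo triangulation formula, which follows from similar triangles: depth is inversely proportional to disparity. The function name, the focal length in pixels, and the disparity values below are hypothetical, chosen only to show that nearer objects produce larger disparity; a minimal sketch, assuming a typical 63 mm eye spacing.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulated depth (metres) from binocular disparity.

    baseline_m: separation between the two viewpoints (the eyes), in metres.
    focal_px: focal length of the idealized pinhole camera, in pixels.
    disparity_px: horizontal offset of the object between the two views.
    """
    return baseline_m * focal_px / disparity_px

# Illustrative values: 63 mm interocular baseline, hypothetical 1000 px focal length.
near = depth_from_disparity(0.063, 1000.0, 50.0)  # large disparity -> near object
far = depth_from_disparity(0.063, 1000.0, 5.0)    # small disparity -> far object
assert near < far  # the larger the disparity, the closer the object
```

The inverse relationship is why disparity is a strong depth cue at close range but weak at a distance: beyond a few metres, the disparity between the eyes' views becomes very small.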
Most stereoscopic viewing mechanisms can only approximate the stereoscopic fusion accomplished by human eyes viewing the real world. In the real world, the eyes both focus on (accommodate) and converge (orient toward) an object of interest, and it is this combination that cues the brain to perceive depth. In most viewing systems, however, the focal distance (the distance to the screen) remains static, and only the convergence of the eyes is varied to provide the perception that an object is in front of, or behind, the screen. This difference can cause the stereoscopic fusion desired by the viewing system to break down: our brains are trained by real-world viewing to expect accommodation and convergence to be linked; when they differ by too much, the left and right images will not fuse into a single object, and a double image will be seen at the screen.
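The vergence-accommodation mismatch described above can be quantified geometrically: the vergence angle for a target at a given distance follows from the triangle formed by the two eyes and the target. The screen and virtual-object distances below are hypothetical, and the 63 mm interpupillary distance is an assumed typical value; a minimal sketch, not a perceptual model.

```python
import math

IPD = 0.063  # interpupillary distance in metres (assumed typical value)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight for a target at distance_m."""
    return math.degrees(2.0 * math.atan(IPD / (2.0 * distance_m)))

screen = 2.0    # accommodation stays fixed at the screen distance
virtual = 0.5   # convergence is driven here by the on-screen disparity

# The conflict: the eyes converge for 0.5 m while focusing at 2.0 m.
conflict_deg = vergence_angle_deg(virtual) - vergence_angle_deg(screen)
assert conflict_deg > 0  # nearer targets demand a larger vergence angle
```

When the virtual object distance equals the screen distance, the conflict vanishes; the further an object is pulled in front of (or pushed behind) the screen, the larger the mismatch grows, which is consistent with fusion breaking down for extreme depth effects.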
Stereoscopic fusion can also break down if the field of view is narrower than that of typical human vision. The eyes provide a field of view of over 180 degrees, including peripheral vision. The edges of objects are important cues for merging the left and right images; in a narrow field of view, such as that of a TV, an object cannot be brought very far out into stereoscopic space before some of its edges disappear in at least one eye. When this happens, the eyes interpret the edge of the screen as part of the image, and stereoscopic fusion again breaks down.
In addition, the geometry of the specific viewing system for stereoscopic content is often either unknown or known to vary. For example, movie content can, and will, be shown to viewers on a variety of screen sizes. In more modern applications such as head-mounted displays (HMDs), focal distances and other geometric factors vary significantly between device types. Thus, content must be captured and rendered in a manner viewable on display systems with very different geometries. The compromises made in accommodating these varying geometries, however, often lead to eyestrain and discomfort and dramatically reduce the stereoscopic effect.
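The screen-size problem can be made concrete with the similar-triangles relationship between on-screen parallax and perceived depth: for uncrossed (positive) parallax, the fused point lies where the two lines of sight intersect behind the screen. The function name, the screen widths, the viewing distances, and the "parallax as a fraction of screen width" convention below are all hypothetical illustration choices; a minimal sketch, assuming a 63 mm interpupillary distance.

```python
def perceived_depth(view_dist_m, parallax_m, ipd=0.063):
    """Distance from the viewer at which a fused point appears, by similar
    triangles. Positive parallax (uncrossed) places the point behind the
    screen; parallax approaching the IPD pushes it toward infinity, and
    parallax exceeding the IPD would force the eyes to diverge.
    """
    return view_dist_m * ipd / (ipd - parallax_m)

# Same encoded content (parallax = 0.5% of screen width) on two displays:
tv = perceived_depth(2.0, 0.005 * 1.0)        # 1 m wide TV viewed at 2 m
cinema = perceived_depth(15.0, 0.005 * 10.0)  # 10 m wide screen viewed at 15 m
assert cinema > tv  # the same content yields a far more extreme depth effect
```

Because parallax stored in the content scales with physical screen width while the viewer's eye spacing does not, identical left/right images produce very different (and on large screens, potentially non-fusible) depth on different displays, which is why fixed compromises across geometries cause discomfort.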