Parallax is the apparent displacement, or the difference in apparent direction, of an object as seen from two different points (that are not positioned on a straight line with the object). Parallax provides visual cues for depth perception and is employed by the human brain for stereopsis. In particular, nearby objects exhibit a larger parallax than distant objects.
Inter-Pupillary Distance (IPD) is the distance between the two pupils of a viewer, or between the two viewpoints of a stereoscopic system. Different people have different IPDs, and may therefore view the same object, from the same distance, at a slightly different parallax.
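As a worked illustration of the relation between parallax, IPD, and distance (a minimal sketch, not from the cited publications; the point-object geometry and the 64 mm IPD value are assumptions), the angular parallax of an object can be computed from the baseline and the viewing distance:

```python
import math

def parallax_angle_deg(ipd_m: float, distance_m: float) -> float:
    """Angular parallax (degrees) of a point object at `distance_m`,
    seen from two viewpoints separated by `ipd_m`: the full angle
    subtended by the baseline at the object."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

# Nearby objects exhibit a larger parallax than distant objects:
near = parallax_angle_deg(0.064, 0.5)   # object at 0.5 m, roughly 7.3 degrees
far = parallax_angle_deg(0.064, 10.0)   # object at 10 m, roughly 0.37 degrees
```

For a typical 64 mm IPD, the parallax of an object at half a meter is about twenty times that of an object at ten meters, which is the distance cue exploited in stereopsis.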
Reference is now made to US Patent Application Publication No. 2013/0100253 to Sawachi, and entitled “Image Processing Device, Imaging Capturing Device, and Method for Processing Image”. This Publication relates to an image processing device including an image acquisition unit, a zoom value acquisition unit, a parallax amount calculation unit, and a parallax amount correction unit. The image acquisition unit acquires stereoscopic images. The zoom value acquisition unit acquires a zoom value of the stereoscopic images. The parallax amount calculation unit calculates a parallax amount of each pixel between the viewpoint images. The parallax amount correction unit calculates a parallax amount correction value for correcting the parallax amount of each pixel of the stereoscopic images (e.g., a left eye image and a right eye image) according to the parallax amount calculated by the parallax amount calculation unit and according to the zoom value acquired by the zoom value acquisition unit.
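The zoom-dependent correction described above can be sketched as follows (an illustrative approximation only; the linear scaling and the `target_zoom` parameter are assumptions for the sketch, not the publication's actual correction formula):

```python
def correct_parallax(parallax_map, zoom_value, target_zoom=1.0):
    """Illustrative parallax amount correction: when a stereoscopic
    image is digitally zoomed, the on-screen parallax of each pixel
    scales with the zoom factor, so the per-pixel parallax amounts
    are attenuated to keep them within their original range.
    `parallax_map` is a flat list of per-pixel parallax amounts (pixels)."""
    correction = target_zoom / zoom_value  # hypothetical correction value
    return [p * correction for p in parallax_map]

# At 2x zoom, raw parallax doubles; the correction halves it back:
corrected = correct_parallax([2.0, 4.0, -1.0], zoom_value=2.0)
```

Without such a correction, zooming into a stereo pair would exaggerate the displayed parallax and could exceed the viewer's comfortable fusing range.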
Reference is now made to U.S. Pat. No. 8,094,927 issued to Jin et al., and entitled “Stereoscopic Display System with Flexible Rendering of Disparity Map According to The Stereoscopic Fusing Capability of The Observer”. This patent relates to a method for customizing scene content, according to a user, for a given stereoscopic display. The method includes the steps of obtaining customization information about the user, obtaining a scene disparity map, determining an aim disparity range for the user, generating a customized disparity map, and applying the customized disparity map. The customization information is specific to a particular user and should be obtained for each user. The scene disparity map is obtained from a pair of given stereo images. The aim disparity range is determined from the customization information for the user. The customized disparity map is generated to correlate with the user's fusing capability on the given stereoscopic display. The customized disparity map is applied for rendering the stereo images for subsequent display.
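One simple way to generate such a customized disparity map is a linear remapping of the scene disparities into the user's aim disparity range (a hypothetical sketch under that assumption, not necessarily the mapping used in the patent):

```python
def remap_disparity(scene_disp, aim_min, aim_max):
    """Linearly remap a scene disparity map (flat list, in pixels)
    into the user's aim disparity range [aim_min, aim_max], so that
    the rendered depth budget matches the user's fusing capability."""
    lo, hi = min(scene_disp), max(scene_disp)
    if hi == lo:
        # Flat scene: place every pixel at the middle of the aim range.
        return [(aim_min + aim_max) / 2.0 for _ in scene_disp]
    scale = (aim_max - aim_min) / (hi - lo)
    return [aim_min + (d - lo) * scale for d in scene_disp]

# A scene spanning -10..+10 px compressed into a user's -2..+2 px range:
customized = remap_disparity([-10.0, 0.0, 10.0], aim_min=-2.0, aim_max=2.0)
```

A user with a narrow fusing range thus receives a compressed disparity map, while a user with a wide range can be shown the scene's full depth.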
Reference is now made to US Patent Application Publication No. 2004/0238732 to State et al., and entitled “Methods and Systems for Dynamic Virtual Convergence and Head Mountable Display”. This Publication relates to a method for dynamic virtual convergence for video see-through head mountable displays, to allow stereoscopic viewing of close-range objects. The method includes the steps of sampling an image with a first camera and a second camera, estimating a gaze distance for a viewer, transforming display frustums to converge at the estimated gaze distance, reprojecting the image sampled by the cameras into the display frustums, and displaying the reprojected image. Each camera has a first field of view. The reprojected image is displayed to the viewer on displays having a second field of view smaller than the first field of view (of the cameras), thereby allowing stereoscopic viewing of close-range objects.
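The convergence transform can be approximated by horizontally shifting each camera image so that points at the estimated gaze distance fall at zero disparity on the displays (a minimal sketch; the parallel pinhole camera model, the focal length in pixels, and the half-shift-per-eye convention are assumptions of this sketch, not the publication's actual reprojection):

```python
def convergence_shift_px(ipd_m, gaze_m, focal_px):
    """Horizontal shift (pixels) applied to each eye's image so that
    the two display frustums converge at the estimated gaze distance.
    For a parallel stereo camera pair with focal length `focal_px`
    (pixels) and baseline `ipd_m` (meters), a point at `gaze_m` has
    disparity focal_px * ipd_m / gaze_m; half that shift is applied
    to each eye's image, in opposite directions."""
    disparity_px = focal_px * ipd_m / gaze_m
    return disparity_px / 2.0

# Converging at 1 m with a 64 mm baseline and a 1000 px focal length:
shift = convergence_shift_px(ipd_m=0.064, gaze_m=1.0, focal_px=1000.0)
```

As the estimated gaze distance decreases, the required shift grows, which is why dynamic (rather than fixed) convergence is needed for comfortable viewing of close-range objects.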