The majority of 2D displays (TVs, computer monitors and the display screens of handheld devices) and 3D (auto)stereoscopic displays (autostereoscopic displays being those that do not require viewer aids to perceive stereoscopic images) available today present images in a way that does not enable the viewer to (re)focus on parts of an image of his choice in a natural way, as when observing a real-life scene.
When viewed from a particular viewpoint, a real-life scene in general has some objects positioned near the viewer and other objects positioned further away, i.e. the scene has depth. For example, a scene may contain a nearby object in the form of a person standing in front of a further-away object in the form of a house in the background. When the viewer focuses on the nearby object, objects at other depths are out of focus within a certain margin. Through accommodation, i.e. adjustment of the optical power of his eye lenses to effect a change of focus, the viewer can choose which objects of the scene to bring into focus and thus view sharply. In real-life scenes, the viewer thus has free focus available.
As said, the majority of current displays do not provide the viewer with this free-focus option. Real-life scenes are usually captured (recorded) and displayed such that objects within a certain depth range are in focus, while other scene objects are not. Thus, for example, the person of the example scene may be captured and displayed in focus while the house is not. The viewer of the display showing this content is required to focus on the screen to perceive the content sharply, so that only the objects that were recorded in focus are perceived sharply. Re-accommodation does not bring the other objects into focus if they do not have the same depth position in the scene as the ones that are in focus. Hence free focus is not available to the viewer, giving a somewhat incomplete viewing experience.
In (auto)stereoscopic displays the lack of free focus causes additional problems for the viewer. Generally, stereoscopic and autostereoscopic displays provide the viewer with depth information through stereoscopic images, i.e. the left and right eyes of a viewer receive images of a scene as observed from different viewpoints that are mutually related by the distance between the viewer's eyes. An example of a lenticular-based autostereoscopic display is disclosed in U.S. Pat. No. 6,064,424.
The available different-viewpoint information gives the viewer a depth experience whereby objects of a scene at different depths are perceived to be positioned not only at the display screen, but also in front of or behind it. However, while the image content (objects) representing a scene is thus supposed to be perceived in front of or behind the screen, the lack of free-focus information in the image forces the viewer to focus (accommodate his eye lenses) on the screen of the display despite his need to focus at the actual depth position of the object in the image. This causes the so-called vergence-accommodation conflict, which can cause visual discomfort.
Vergence is the extent to which the visual axes of the two eyes of a viewer converge: the axes are more converged for objects nearer to the viewer. In natural human vision there is a direct link between the amount of vergence and the amount of accommodation required to view an object at a given distance sharply. A conventional (auto)stereoscopic display like the one in U.S. Pat. No. 6,064,424 forces the viewer to break this link between vergence and accommodation, by maintaining accommodation on a fixed plane (the display screen) while vergence varies dynamically. The vergence-accommodation conflict is described in more detail in WO2006/017771.
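The geometry behind the conflict can be made concrete with a short calculation. The following is an illustrative sketch only; the 6.3 cm interpupillary distance and the viewing distances are assumed typical values, not figures from this description:

```python
import math

IPD = 0.063  # assumed typical interpupillary distance in metres


def vergence_deg(distance_m):
    """Full angle (degrees) between the two visual axes when both
    eyes fixate an object at the given distance."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))


def accommodation_dioptres(distance_m):
    """Accommodation demand, in dioptres, for sharp focus at the
    given distance (1 / distance)."""
    return 1.0 / distance_m


# Assumed example: viewer sits 2 m from the screen, and the
# stereoscopic content places an object 1 m in front of the viewer.
screen, obj = 2.0, 1.0

# Natural viewing: vergence and accommodation agree at the object.
# Conventional stereoscopic display: vergence follows the object,
# but accommodation stays locked on the screen plane.
print(f"vergence at object:      {vergence_deg(obj):.2f} deg")
print(f"accommodation (natural): {accommodation_dioptres(obj):.2f} D")
print(f"accommodation (display): {accommodation_dioptres(screen):.2f} D")
```

The mismatch between the last two figures (here 1.0 D demanded by vergence versus 0.5 D actually exercised at the screen) is the decoupling described above.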
Thus, a display that provides the ability to freely focus on the content it displays not only provides a more natural or complete image (2D or 3D) of a scene, it can also reduce the discomfort caused by the vergence-accommodation conflict in 3D displays. For instance, when the viewer focuses at infinity, content at infinity should appear sharp, while content at screen depth should appear as blurred as the display bezel itself.
Plenoptic cameras are known today, and these are able to record 2D images of scenes such that the focus information is present in the generated image content. However, while display of such content on 2D displays using appropriate software may provide a viewer with a choice of which depth regions of a 2D image are viewed sharply, this choice must be made through software adjustment of the displayed image; it cannot be made through eye lens accommodation. Hence no free focus in the sense of the current invention is provided. WO2006/017771 discloses a 3D display that attempts to address the problem by providing a system using variable-focus mirrors or lenses to generate images with image voxels having different focal distances.
Holographic displays also address the problem. Holography is the complete capturing and reproduction of a scene in light, based on the diffraction of electromagnetic waves. Holographic displays require both a very high resolution and the ability to control not only the luminance but also the phase of light.
Computational holography is the recreation of the diffraction patterns of a virtual scene. To compute a full hologram, each pixel-voxel combination has to be taken into account. Given that there should be at least as many voxels as there are Full HD pixels, and that the screen resolution of the hologram is many times the Full HD resolution, this results in staggering computational complexity.
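A back-of-the-envelope estimate illustrates why the pixel-voxel product is prohibitive. All figures below are illustrative assumptions (the oversampling factor, the operations per pair and the frame rate are not taken from this description):

```python
# Rough cost of computing a full hologram, per the pixel-voxel
# argument: every hologram pixel receives a contribution from
# every scene voxel.
full_hd = 1920 * 1080          # ~2.07e6; lower bound on voxel count
oversample = 100               # assumed: hologram screen has ~100x
                               # the Full HD pixel count
pixels = full_hd * oversample  # hologram pixels
voxels = full_hd               # scene voxels (at least Full HD)

# Each pixel-voxel pair contributes one complex phase term.
pairs = pixels * voxels
print(f"pixel-voxel pairs per frame: {pairs:.2e}")

# Assuming, say, 10 floating point operations per pair at 60 frames
# per second:
flops = pairs * 10 * 60
print(f"required throughput:         {flops:.2e} FLOP/s")
```

Under these assumptions the throughput lands in the hundreds of petaflops per second, far beyond any consumer device, which is the "staggering computational complexity" referred to above.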
The company SeeReal has developed a more practical holographic display solution that uses beam steering in conjunction with eye tracking, so that a correct hologram is produced only for the pupil positions of the viewer. This is reported in S. Reichelt et al., “Holographic 3-D Displays—Electro-holography within the Grasp of Commercialization”, in Advances in Lasers and Electro Optics, pp. 683-710, ISBN 978-953-307-088-9, 2010.
The small beam width allows a larger pixel pitch (30-70 μm), which not only increases manufacturing feasibility but also reduces the computational cost by orders of magnitude. However, the required 1 TFLOP (one trillion floating point operations per second) still seems exotic.
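The relation between pixel pitch and beam width follows from the grating equation for a pixelated hologram, sin(θ) = λ/(2p). The sketch below uses an assumed wavelength of 532 nm (green laser light, not a figure from this description) to show that a 30-70 μm pitch only deflects light through a fraction of a degree, enough to cover a tracked pupil but far too little for a full viewing zone:

```python
import math


def max_deflection_deg(pitch_m, wavelength_m=532e-9):
    """Maximum beam deflection angle (degrees) of a hologram with
    the given pixel pitch, from sin(theta) = lambda / (2 * pitch)."""
    return math.degrees(math.asin(wavelength_m / (2 * pitch_m)))


# The 30-70 um pitch range quoted for the SeeReal approach:
for pitch_um in (30, 70):
    theta = max_deflection_deg(pitch_um * 1e-6)
    print(f"pitch {pitch_um:3d} um -> max deflection {theta:.3f} deg")
```

Such sub-degree deflection angles are why the approach only works when eye tracking keeps the narrow beams aimed at the pupils.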
A 256-view super multi-view stereoscopic display, which provides two views per pupil of an eye of a viewer using multi-projection of lenticular displays to construct the 256 views, is described by Yasuhiro Takaki and Nichiyo Nago in Optics Express, vol. 18, no. 9, pp. 8824-8835. This display requires 16 separate flat-panel 3D displays.
The invention addresses the need for a display device which enables real depth perception, with the viewer focusing at (or nearer to) the depth conveyed by the image rather than at the display screen, and which additionally enables displayed images to be processed with reduced computational complexity.
When used in a 3D display, the invention also aims at reducing the visual discomfort of the vergence-accommodation conflict. The invention allows these aims to be achieved with a display screen having a relatively flat form factor.