The applicant has already disclosed a number of holographic reconstruction systems which three-dimensionally reconstruct a scene with the help of a propagating modulated light wave field, which is directed by a wave tracking means at at least one eye of an observer.
This document describes the functional principle of such a reconstruction system using the example of a light wave field which reconstructs a scene so that it is visible for one eye of an observer. For the other eye, the system can generate, in a space- or time-division multiplex process, a second wave field with holographic information which differs in parallax. However, the system can generally also provide a wave field with a sufficiently large visibility region. The systems can also generate and direct separate wave fields for multiple observers in a space- or time-division multiplex process.
A basic principle for a reconstruction system is applied for the present invention, where spatial light modulator means represent a video hologram. FIG. 1 shows a general technical problem of a reconstruction system which uses light modulator means with discrete modulator cells. In this example, the light modulator means is a single light modulator SLM which modulates a light wave field LW that is capable of generating interference with holographic information, either when light shines through it, i.e. in a transmissive cell grid mode, or by way of controllable, spatially arranged micro reflectors. The light modulator SLM is dynamically encoded with the holographic information of the scene. In either case, a modulated wave field is created which, after a Fourier transformation by a focussing lens L, reconstructs the object light points of the scene in the space in front of the focal plane FL. The focussing lens L ensures that the light emitted by all regions of the video hologram passes through the visibility region in a defined manner.
As in conventional holography with photographic plates or photographic film, for a conventional video hologram each modulator cell of the light modulator SLM shown in FIG. 1 also comprises the entire holographic information of the scene. If such a video hologram were divided, each hologram region could by itself holographically reconstruct the entire scene, depending on the observer angle; only the angular range from which the object can be watched decreases.
However, a problem occurs if the known system encodes the holographic information for each object light point on the entire modulator surface of a two-dimensional spatial light modulator with a pixelated modulator cell structure, e.g. a liquid crystal display. In addition to each desired reconstructed object light point, parasitic light points, which lie in the spatial frequency spectrum, inevitably occur in further diffraction orders. FIG. 1 illustrates, in a greatly simplified manner, the selected object light point OP0 in the diffraction order used by the system and, in addition, the parasitic light points OP+1 and OP−1 in the diffraction orders +1 and −1. Further parasitic light points, which are of little interest in the context of the present invention, occur in higher diffraction orders. At the position of the reconstruction, the object light point lies in a diffraction interval aligned with all parasitic light points. After the reconstruction, light wave cones propagate at periodic distances from each light point to the focal plane; their opening angles are defined by the wavelength of the light which illuminates the modulator cells and by the pitch of the modulator cells in the cell structure.
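The periodic spacing of the diffraction orders and the opening angle of the wave cones follow from the standard grating equation for a pixelated modulator. The Python sketch below illustrates this relation; all numerical values (wavelength, cell pitch, focal length) are purely illustrative assumptions and are not taken from the document:

```python
import math

def diffraction_order_angle(wavelength_m: float, cell_pitch_m: float, order: int = 1) -> float:
    """Propagation angle of the given diffraction order for a cell grid of the
    given pitch (standard grating equation: sin(theta) = m * lambda / p)."""
    return math.asin(order * wavelength_m / cell_pitch_m)

def diffraction_interval_width(wavelength_m: float, cell_pitch_m: float, focal_length_m: float) -> float:
    """Approximate width of one diffraction interval in the focal plane, i.e.
    the periodic spacing of adjacent orders (paraxial approximation)."""
    return focal_length_m * wavelength_m / cell_pitch_m

# Illustrative values only: green light, 30 um cell pitch, 0.5 m focal length.
wavelength = 532e-9
pitch = 30e-6
f = 0.5
theta = diffraction_order_angle(wavelength, pitch)
interval = diffraction_interval_width(wavelength, pitch, f)
print(f"first-order angle: {math.degrees(theta):.3f} deg")
print(f"diffraction interval in the focal plane: {interval * 1e3:.2f} mm")
```

The sketch shows why a finer cell pitch widens the usable diffraction interval: the interval width scales with the ratio of wavelength to pitch.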
The light wave cones of all reconstructed light points OP+1, OP0 and OP−1 propagate at a wide angle, so that light from the adjacent wave cones of the parasitic light points OP+1 and OP−1 also appears in the visibility region VR, which is defined in the focal plane FL by the diffraction order used for the reconstruction, and those parasitic light points become visible. This disturbance cannot be compensated by way of filtering.
Such a holographic reconstruction system was first described by the applicant of this invention in international publication no. WO 2004/044659, titled "Video hologram and device for reconstructing video holograms". FIG. 2 shows a possibility, known from that publication, to overcome this drawback.
In order to avoid light waves of higher diffraction orders in the visibility region, small object elements of the scene, preferably discrete object light points, which are separately reconstructed by the reconstruction system, are used for encoding the light modulator means. In the example, a computer-aided hologram processor means (not shown) reduces the encoded surface of the light modulator SLM for each object light point to a hologram region H0 in correspondence with its spatial position in front of the visibility region VR and with the size of the visibility region VR. As a consequence, only light from the object light point of the used diffraction order enters the visibility region VR. The visibility region thus lies in only one diffraction order. An observer who looks with at least one eye towards the video hologram and watches the scene cannot see the light wave which, in this example, is emitted by the parasitic light point OP+1.
A hologram processor (not shown) of the system controller computes the surface area of each hologram region depending on the axial position of the object light point OP0 in space. This means that both the axial distance d1 from the object light point OP0 and the distance d2 from the focal plane to the light modulator SLM define the surface area of the hologram region H0. The lateral deviation of the reconstructed object light point OP0 from the optical axis of the light modulator SLM defines the position of the hologram region H0 on the surface of the light modulator SLM. In other words, the size and position of each hologram region H0 are defined by imaginary planes connecting the visibility region VR through the respective object light point to the modulator surface of the light modulator.
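The projection of the visibility region through an object light point onto the modulator surface is a similar-triangles construction. The following sketch makes one plausible reading of the geometry explicit; the exact reference points of d1 and d2 are not fully specified in the document, so the assumed convention (both distances measured from the focal plane) is stated in the code and should be taken as an illustration, not as the system's definitive geometry:

```python
def hologram_region(vr_center, vr_width, op_lateral, d1, d2):
    """Project the visibility region VR through the object light point OP0
    onto the modulator surface (similar triangles).

    Assumed geometry (one plausible reading, not confirmed by the document):
    the focal plane lies at z = 0, d1 is the axial distance of OP0 from the
    focal plane, d2 is the axial distance of the SLM from the focal plane,
    with 0 < d1 < d2. All lateral coordinates are in the same units.
    Returns (center, width) of the hologram region H0 on the SLM surface.
    """
    scale = d2 / d1
    left = vr_center - vr_width / 2
    right = vr_center + vr_width / 2
    # Extend the rays from the VR edges through OP0 up to the SLM plane.
    h_left = left + (op_lateral - left) * scale
    h_right = right + (op_lateral - right) * scale
    center = (h_left + h_right) / 2
    width = abs(h_right - h_left)
    return center, width

# Illustrative numbers: 10 mm wide VR on the optical axis, object point 0.1 m
# from the focal plane, SLM 0.5 m from the focal plane.
c, w = hologram_region(vr_center=0.0, vr_width=0.01, op_lateral=0.0, d1=0.1, d2=0.5)
print(round(c, 6), round(w, 6))
```

Under this geometry the region width equals vr_width * (d2 - d1) / d1, so object light points closer to the SLM (larger d1) occupy smaller hologram regions, while the lateral offset of OP0 shifts the region across the modulator surface, matching the description above.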
This encoding method has also been disclosed by the applicant in the international publication no. WO 2006/119920, titled “Device for holographic reconstruction of three-dimensional scenes”.
FIG. 3 schematically shows, for a three-dimensional scene 3DS, only those light waves which are emitted by the reconstructed object light points in the used diffraction order. This example shows only a few selected object light points OP1 . . . OP4 of a scene section. For each individual object light point OP1 . . . OP4, the hologram processor HP encodes a separate hologram region H1 . . . H4 in a number of adjacent modulator cells of the light modulator SLM. In conjunction with the focussing lens L, each hologram region forms an adjustable lens which reconstructs its object light point OP in the space between the SLM and the focal plane FL, such that its light wave propagates into the visibility region VR without leaving the used diffraction order in the focal plane FL. This prevents parasitic light points of other diffraction orders from being perceived in the visibility region VR. The hologram processor HP always assigns the holographic information of an individual object light point only to a limited hologram region H of the modulator surface. Considering the data of the current eye position, which is provided by the system controller with the help of an eye finder, the hologram processor computes the position and size of each hologram region.
The two prior art reconstruction systems have the disadvantage that the reconstruction is only visible without errors from the visibility region VR, which lies in the focal plane FL. Only there do all light waves of the reconstructed object light points coincide to form a light wave field which entirely represents the reconstruction of the scene. The visibility region is of a virtual nature and is thus difficult for the observer to detect without aids. Because the reconstruction system does not have a spatial frequency filter for suppressing adjacent diffraction orders, light from parasitic light points propagates beyond the focal plane FL on to the eye pupil. This is illustrated in FIG. 5a with the example of a reconstructed object light point OP0. If an observer eye were positioned in the indicated visibility region VR2, light from the parasitic light point OP+1 would also propagate into the eye pupil, and the light point OP+1 would thus be visible as a disturbing spot.
Moreover, once the distance between the eye position and the focal plane FL exceeds a certain magnitude in either direction, the light waves emitted by certain hologram regions, in particular those which lie at the margin of the light modulator SLM, no longer propagate to the eye pupil of the observer eye, so that the corresponding object light points are not visible at that eye position. This disadvantage is illustrated in FIG. 6a. An observer eye which is positioned in the indicated visibility region VR2 cannot see the object light point OP3 of the reconstructed scene because its light wave does not fall on the eye pupil. This fact requires the propagating wave field with the reconstruction and the visibility region VR to be directed at the current eye position and to be tracked if an observer moves his head.
Prior art holographic reconstruction systems therefore comprise an eye finder and corresponding tracking means. If the observer moves, the tracking system tracks the modulated wave field to the changed current eye position, for example by changing the active light source position. The term 'current eye position' shall be understood hereinafter as the eye position at the end of a modulated wave field, i.e. the position of the at least one observer eye for which the currently encoded video hologram has modulated the wave field. Example: a holographic reconstruction system provides a separately modulated wave field for each eye of an observer in a time-division multiplex process. If the system controller receives the information that two observers watch the reconstruction, it must provide these modulated light wave fields, one after another, for four different eye positions, where the holographic contents for the right and the left eye differ. At the time when the video hologram sequence provides a single hologram for a right eye, the tracking system directs the modulated wave field with that hologram first towards the right eye of the first observer and then towards the right eye of the second observer. Thereafter, when a single hologram for a left eye is encoded, the two remaining eye positions are addressed.
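The time-division multiplex sequence in the example above (a single right-eye hologram directed in turn at each observer's right eye, then a left-eye hologram at the remaining eye positions) can be sketched as a simple scheduling loop. The class and function names below are hypothetical illustrations and not part of the described system:

```python
from dataclasses import dataclass

@dataclass
class Observer:
    """One tracked observer; eye positions would come from an eye finder."""
    name: str
    right_eye_pos: tuple
    left_eye_pos: tuple

def multiplex_schedule(observers):
    """Yield (hologram content, observer name, target eye position) triples in
    the order described: first the right-eye hologram for every observer's
    right eye, then the left-eye hologram for every observer's left eye."""
    for content, attr in (("right-eye hologram", "right_eye_pos"),
                          ("left-eye hologram", "left_eye_pos")):
        for obs in observers:
            yield content, obs.name, getattr(obs, attr)

# Two observers: four eye positions served one after another.
observers = [Observer("observer 1", (0.03, 0.0), (-0.03, 0.0)),
             Observer("observer 2", (0.53, 0.0), (0.47, 0.0))]
for content, name, pos in multiplex_schedule(observers):
    print(f"encode {content}, direct wave field to {name} at {pos}")
```

With two observers the loop emits four tracking steps per hologram pair, one per current eye position, matching the sequencing in the text.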
Such tracking means are relatively complicated and comprise optical elements which severely deform the propagating wave field prior to the reconstruction of the scene. The optical tracking means deflect the modulated wave field at an oblique angle of incidence, which depends on the current eye position and can differ considerably from the optical axis of the components. Consequently, aberrations and propagation-time errors with fluctuating components occur.
These errors cause a position-dependent deformation of the propagating wave field and must be compensated prior to the reconstruction. Changing observer eye positions cause aberrations, such as spherical aberration, coma, field curvature, astigmatism and distortion, which are difficult to compensate due to their fluctuating components. These deformations disturb the coincidence of the light waves in the visibility region, so that individual reconstructed object light points of the scene are reconstructed at an incorrect position or in a blurred manner; the scene is thus represented in a distorted manner or, in extreme cases, individual objects of the scene are even missing in the visibility region.
Another holographic reconstruction system, whose object is to considerably reduce the computational load when encoding the light modulator SLM, is known from international publication WO 01/095016, titled "Computation time reduction for three-dimensional displays".
Compared with the system described above, that system uses a very high-resolution light modulator SLM and always encodes the current hologram only into a variable eye-position-specific light modulator region with a limited number of modulator cells. An eye finder determines for the control means of the system both the eye position and the details of the scene which the observer is currently watching. The control means thus identify in the data of the current video hologram the modulator cells which contribute to the reconstruction of these viewed details, and compute the code values for the light modulator region, depending on the viewing direction of the observer eyes towards the display screen. In order to reduce the computational load, the system controller computes the values for the identified modulator cells with the highest priority. The system then forms a corresponding system exit pupil which reconstructs the details. The remainder of the reconstructed object, which is not currently watched by the observer or which cannot be seen from the eye position, is computed and updated by the system controller with low priority and/or at a lower frequency. In correspondence with the pupil position of the observer eye, the system controller simultaneously modifies the shape, size and position of the corresponding exit pupil. The simulated object appears in a relatively small three-dimensional polyhedron, which lies around the focal plane FL of the optical reconstruction system.
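The priority scheme described above (viewed details recomputed every frame, the rest refreshed rarely) can be sketched as follows. The function and parameter names, as well as the refresh ratio, are hypothetical illustrations, not the actual computation of that prior art system:

```python
def cells_to_update(all_cells, viewed_cells, frame_index, low_priority_every=5):
    """Return the modulator cells recomputed in the given frame under the
    described priority scheme: cells contributing to the currently viewed
    scene details are recomputed every frame (highest priority); all other
    cells are refreshed only every `low_priority_every` frames.

    The refresh ratio of 5 is an arbitrary illustrative assumption."""
    updated = []
    for cell in all_cells:
        if cell in viewed_cells:
            updated.append(cell)  # high priority: recompute every frame
        elif frame_index % low_priority_every == 0:
            updated.append(cell)  # low priority: refresh only occasionally
    return updated

# Illustrative run: 10 cells, of which cells 2..4 reconstruct viewed details.
print(cells_to_update(list(range(10)), {2, 3, 4}, frame_index=1))
print(cells_to_update(list(range(10)), {2, 3, 4}, frame_index=5))
```

In most frames only the viewed-detail cells are recomputed, which is exactly the computational saving the publication claims; the full cell set is refreshed only on the low-priority frames.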
Besides a relatively small reconstruction space close to the focal plane FL, that prior art reconstruction system has the disadvantage, compared with the former one, that it only uses a greatly limited number of the available modulator cells of the light modulator. This restriction greatly reduces the range from which the reconstruction is visible and requires, compared with the former system, a light modulator with a much higher resolution and, as shown in the document, an optical reconstruction system with a dimension that is larger than the light modulator. Because a fixed background is missing, the system is poorly suited for the reconstruction of video scenes with objects in multiple spatial depths. In that system, too, the modulated wave field must propagate at an oblique angle through the optical reconstruction system. This also forms a source of aberrations which depend on changing eye positions.