While the prior art knows many different methods for displaying three-dimensional images, one essential method that permits three-dimensional images to be viewed without viewing aids is based on combining the views An, according to a specified combination rule for three-dimensional display, into a combination image that is displayed on the grid of pixels, in such a way that, according to the combination rule, only part of the pixels bn(xk,yl) assigned to each view An are displayed on the grid of pixels. In addition, propagation directions are fixed for the views An, so that a viewer's left eye perceives a different selection of the views An than the viewer's right eye, whereby an impression of three-dimensional vision is produced.
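The interleaving described above can be sketched as follows. This is a minimal illustration only: the rule used here, taking the color for column k and row l from view (k + l) mod N, is a hypothetical placeholder, since the actual combination rule is prescribed by the display screen and its filter array.

```python
import numpy as np

def combine_views(views):
    """Interleave N equally sized views into one combination image.

    `views` is a list of N arrays of shape (H, W, C). The rule
    used here -- view index (k + l) mod N for column k, row l --
    is only an illustrative placeholder; real autostereoscopic
    displays prescribe their own rule matched to the filter array.
    """
    n = len(views)
    h, w, _ = views[0].shape
    combined = np.empty_like(views[0])
    for l in range(h):
        for k in range(w):
            # each output pixel shows exactly one view's pixel
            combined[l, k] = views[(k + l) % n][l, k]
    return combined
```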
Such combination rules are described, e.g., in DE 100 03 326 A1. The combination rules are always described depending on the properties of the display screen, which must be suitable for three-dimensional display. For example, the display screen may be provided with a special filter array that is composed of transparent and opaque filter elements, so that the propagation directions for the views are determined in an interaction with the combination rule.
An essential aspect of such methods is the generation of the different views, which correspond to different viewing positions. In classical movies, shot for example with stereo cameras or with cameras positioned at the appropriate viewing positions, such views can be digitally recorded and combined, and the time spent on this does not necessarily matter, because the viewer gets to see a finished product that is no longer variable and therefore static. In contrast, for computer-animated objects, such as are used, e.g., in navigation devices or computer games, the generation of the different views turns out to be a time factor that grows with the resolution of the display screens and the number of views. It thus decreases speed and, despite the increased capacity of graphics cards, disturbs the course of the frame sequences, which, as a rule, first have to be regenerated in interaction with the viewer, possibly leading, e.g., to jerking.
For all that, the prior art knows various methods for generating the views starting from a source view Q. What these methods have in common is that, first, a source view Q projected onto a projection surface P(x,y), with a horizontal coordinate x and a vertical coordinate y, is provided, to which an original viewing position Bq is assigned. The source view Q is composed of source pixels bq(xi,yj), with rows j=1, . . . , J and columns i=1, . . . , I. In every source pixel bq(xi,yj), at least one item of color information is stored. Also provided is a depth map T referenced to the projection surface, to which depth map the original viewing position Bq is likewise assigned. The depth map is composed of depth pixels t(xp,yr), with rows r=1, . . . , R and columns p=1, . . . , P. In every depth pixel t(xp,yr), at least one item of depth information is stored, the depth information corresponding to a perpendicular distance from the projection surface P(x,y). The source or depth pixels may also store further display-related information. The original viewing position Bq is assigned to a first view A0. Frequently it is selected in such a way that it lies opposite to, and centered on, the projection surface, so that a ray from the viewing position Bq to the projection surface which is perpendicular to that surface approximately pierces the central pixel of the source view Q. Starting from this original viewing position Bq, N−1 further, pairwise different viewing positions Bm are generated by horizontal shifting of this position; these correspond to the other views Am, with m=1, . . . , N−1. Items of color information for image pixels bn(xk,yl), with rows l=1, . . . , L and columns k=1, . . . , K, then have to be determined for all views An.
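The generation of the shifted viewing positions can be sketched briefly. The symmetric arrangement around Bq and the step size used below are illustrative choices, not prescribed by the text, which only requires horizontal shifts producing pairwise different positions.

```python
def viewing_positions(bq, n, step):
    """Return N viewing positions: B_q plus N-1 positions obtained
    by horizontal shifts of B_q.

    `bq` is an (x, y, z) tuple; `step` is the horizontal spacing.
    The symmetric pattern around B_q and the step size are
    illustrative assumptions, not fixed by the described method.
    """
    x, y, z = bq
    # offsets ..., -step, 0, +step, ... centered on B_q
    return [(x + (m - (n - 1) / 2.0) * step, y, z) for m in range(n)]
```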
In the classical approach, e.g. in the unpublished German application No. 10 2006 005 004, an original view A0, here identical to the source view Q, is replicated (N−1)-fold. In accordance with the new viewing positions and the depth information of the primitives on which the view is based, the views can then be computed, with the method requiring that all pixels bm(xk,yl) be determined anew for each view.
Another method is described in DE 696 21 778 T2. Starting from an original view A0, for generating the other views each pixel of this view is shifted horizontally to the left or right in proportion to its depth information, a procedure known as parallactic pixel shifting. The result, then, is a view shifted in perspective; any gaps formed are filled up by interpolation. Pixel shifting has to be executed separately for each view.
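The parallactic pixel shifting just described can be sketched as follows. This is a simplified one-channel sketch, not the method of DE 696 21 778 T2 itself: the proportionality factor `shift_scale` and the row-wise linear interpolation used to fill gaps are illustrative assumptions.

```python
import numpy as np

def shift_view(src, depth, shift_scale):
    """Parallactic pixel shifting: move each source pixel
    horizontally in proportion to its depth value, then fill the
    gaps left behind by linear interpolation along each row.

    `src` is an H x W grayscale image, `depth` an H x W map with
    values in [0, 1]; `shift_scale` (maximum shift in pixels) is a
    free parameter of this sketch.
    """
    h, w = src.shape
    out = np.full((h, w), np.nan)
    for l in range(h):
        for k in range(w):
            # shift proportional to the stored depth information
            k2 = k + int(round(shift_scale * depth[l, k]))
            if 0 <= k2 < w:
                out[l, k2] = src[l, k]  # later pixels may overwrite
        # fill gaps formed by the shift via interpolation
        row = out[l]
        missing = np.isnan(row)
        if missing.any() and not missing.all():
            row[missing] = np.interp(np.flatnonzero(missing),
                                     np.flatnonzero(~missing),
                                     row[~missing])
    return out
```

As the text notes, this shifting has to be executed separately for each of the N−1 further views, which is what makes the computing time grow with the number of views.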
With all these approaches, the computing time needed greatly increases with the number of views. On the other hand, the use of a great number of views is actually desirable, as this leads to a high-quality three-dimensional viewing impression.
Known from another field of computer graphics is a method called relief mapping. Using this method, one can eliminate a typical artefact of two-dimensional computer graphics applied to the display of three-dimensional objects: the fact that seemingly spatial textures or patterns applied onto a computer-graphic object, on closer examination, are actually seen to be two- rather than three-dimensional. If, e.g., a brickwork texture is applied onto a computer-graphic object intended to show a wall, the texture seen from some distance actually looks like a genuine brick wall, but if the viewing position is very close to, and at an oblique angle to, the wall, the texture appears as what it is, viz. virtually a two-dimensional decal without any three-dimensional contour. The method of relief mapping makes it possible to eliminate these artefacts and to apply, e.g., to the brick wall described above, a structure that really has a spatial effect and retains this effect even when viewed from unfavorable viewing positions. The method is described in detail in the literature, e.g., in the article “Relief Texture Mapping” by M. Oliveira, G. Bishop and D. McAllister, published in Proceedings of SIGGRAPH 2000, pages 359-368, and in the article “Real-Time Relief Mapping on Arbitrary Polygonal Surfaces” by F. Policarpo, M. Oliveira and J. Comba, published in Proceedings of ACM Symposium on Interactive 3D Graphics and Games 2005, ACM Press, pages 155-162. Herein, reference is explicitly made to these documents. Working similarly to ray tracing, the relief mapping method is employed to make objects displayed in two dimensions look more genuinely three-dimensional.
It differs from the ray tracing method mainly in that the latter determines the intersection of a ray with the geometric objects of the 3D scene in order to compute the image of a 3D scene, whereas the relief mapping method exclusively determines the intersection with the depth values of an already completely computed image in order to subsequently change the positions of pixels. It may even be feasible for relief mapping to be applied to an image previously computed by means of ray tracing, since here, too, a depth value was assigned to each pixel. No application of this method with regard to an actually three-dimensional display is known, though.
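The core of this intersection search can be sketched in one dimension. The sketch below marches a viewing ray across a depth profile and reports where the ray first pierces the stored depth values; the linear step count is an illustrative simplification, since real relief-mapping implementations, e.g. that of Policarpo et al., refine the hit afterwards with a binary search.

```python
def relief_intersect(depth, x0, dx, steps=100):
    """Intersect a viewing ray with a sampled 1-D depth profile,
    the core search of relief mapping reduced to one dimension.

    `depth` is a sequence of depth values in [0, 1] sampled over
    x in [0, 1]; the ray enters the layer at (x0, depth 0) and
    advances `dx` in x per unit of depth. Returns the x position
    of the first intersection, or None if the ray leaves the
    profile without hitting it. A linear-search sketch only.
    """
    for s in range(steps + 1):
        d = s / steps                 # current ray depth in [0, 1]
        x = x0 + dx * d               # current ray x position
        if not (0.0 <= x <= 1.0):
            return None               # ray left the sampled region
        # nearest sample of the depth profile at this x
        i = min(int(x * (len(depth) - 1) + 0.5), len(depth) - 1)
        if d >= depth[i]:
            return x                  # ray has pierced the surface
    return None
```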