1. Field of the Invention
Embodiments of the present invention generally relate to display technologies and, more particularly, to the use of ray tracing for generating images for auto-stereo displays.
2. Description of the Related Art
Humans perceive three dimensional (3D) properties, such as depth, by processing slight differences between the images viewed by each eye. These differences result from the different location of each eye and give rise to a phenomenon known as stereo parallax. As an example, a portion of a first object in a scene may be blocked (occluded) by a second object when viewed by the left eye, but the same portion may be visible when viewed by the right eye. Movement parallax is a similar phenomenon that results in different images when a viewer rotates his head or otherwise changes the eyes' viewing positions.
In an effort to make computer-generated graphics displayed on two dimensional (2D) displays seem more realistic to the viewer, development efforts have gone into stereo displays that are capable of presenting different images to each eye to simulate the effects of stereo and/or movement parallax. In some cases, these effects may be simulated using a special headset or goggles that include a separate display for each eye. However, some users find such headgear to be uncomfortable or restrictive, for example, by limiting the capability of users to otherwise interact with the viewing environment. As an alternative to such headgear, techniques to display stereo images on more conventional display devices, generally referred to herein as auto-stereo displays, have been developed.
FIGS. 1A-1B illustrate, conceptually, how a stereo image of a scene 100 of objects 110A-110C may be generated and displayed on a display device 140. Referring first to FIG. 1A, a stereo pair of images may be created by generating an image from each of two different points of view, conceptually captured by cameras 120L and 120R, with a separation analogous to that of the eyes 152L and 152R of a viewer 150. As previously described, differences between the images, such as the amount of object 110A that is visible (or blocked by object 110B) from each viewpoint, allow the viewer 150 to perceive depth in the scene.
The separate images may then be combined, for example, by some type of processing logic 130, to generate a composite image to be displayed on the device 140. As illustrated in FIG. 1B, this concept may be expanded to capture images from more points of view, for example, to display multiple stereo images (each from a different viewpoint), which may allow the effects of movement parallax to be simulated. In either case, some percentage of the total display area of the device 140 may be allocated to each image.
For example, as illustrated in FIGS. 2A and 2B, to display a single view stereo image, a first set of pixel rows 210L may be allocated to display an image corresponding to the left eye, while an interleaved second set of pixel rows 210R may be allocated to simultaneously display an image corresponding to the right eye. Such displays typically utilize some type of mechanism to ensure only the appropriate image portion of the screen is visible to each eye. For example, as illustrated in FIG. 2A, a set of lenses 220 may be arranged to ensure that pixel regions 210L are only visible to the left eye, while pixel regions 210R are only visible to the right eye. Alternatively, as illustrated in FIG. 2B, a barrier mask 230 may be utilized. As still another alternative, some type of active shuttering mechanism may be utilized.
In a typical computer system, the scene 100 may actually be stored in a 3D image file, for example, as a collection of polygons (e.g., triangles) used to represent the objects 110 therein. Multiple image views (e.g., one or more stereo pairs) may then be generated by rendering images of the scene from each of the corresponding different viewpoints during a process referred to as rasterization. Rasterization generally involves determining, for each polygon, which pixels are covered by the polygon and, if the corresponding object is closer to the viewer than any other object in the scene, writing a corresponding color value to that pixel. The multiple views may then be assembled to generate a single composite image to be displayed on the device 140.
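The per-pixel depth comparison described above can be sketched as follows. This is a simplified illustration only, not an embodiment of the invention; the polygon interface (`covers`, `depth_at`, `color`) is a hypothetical placeholder for whatever coverage and depth evaluation a real rasterizer performs.

```python
def rasterize(polygons, width, height):
    """Rasterize a scene for one viewpoint using a simple depth buffer.

    Each polygon is assumed to expose covers(x, y), depth_at(x, y), and a
    color attribute; these names are illustrative placeholders.
    """
    color_buffer = [[(0, 0, 0)] * width for _ in range(height)]
    depth_buffer = [[float("inf")] * width for _ in range(height)]
    for poly in polygons:
        for y in range(height):
            for x in range(width):
                if poly.covers(x, y):
                    z = poly.depth_at(x, y)
                    # Keep the color of the polygon closest to the viewer.
                    if z < depth_buffer[y][x]:
                        depth_buffer[y][x] = z
                        color_buffer[y][x] = poly.color
    return color_buffer
```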
FIG. 3 illustrates operations 300 of a conventional algorithm, for example, that may be performed by a conventional processing system utilizing one or more central processing units (CPUs) and/or graphics processing units (GPUs), for generating a composite image from multiple views. At step 302, a loop of operations to be performed for each of the views is entered. At step 303, the scene data is fetched and, at step 304, an image is generated for a current view. These steps are repeated for each viewpoint. Once the last image has been rendered, as determined at step 306, a composite image is formed by assembling the images generated for the different viewpoints.
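The conventional multi-pass loop of operations 300 can be summarized as follows; `fetch_scene`, `render_view`, and `assemble` are hypothetical caller-supplied functions standing in for the scene access, per-view rendering, and final assembly described above, and are not part of any embodiment.

```python
def generate_composite(viewpoints, fetch_scene, render_view, assemble):
    """Conventional approach: one full rendering pass per viewpoint.

    Note that the scene data is fetched again on every pass, which is a
    source of the data-transfer bottleneck discussed below.
    """
    images = []
    for viewpoint in viewpoints:          # loop over views (step 302)
        scene = fetch_scene()             # scene data fetched each pass (step 303)
        images.append(render_view(scene, viewpoint))  # render one view (step 304)
    # After the last view (step 306), assemble the composite image.
    return assemble(images)
```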
Typically, when the composite image is formed, only a portion of each rendered image is used while the remaining portions of each rendered image may be discarded because there is only a fixed number of pixels in the display. For example, where a single stereo image is assembled from left and right rendered images, half of the pixels from the left image may be interleaved with half of the pixels from the right image to form the stereo image. The pixels from the left and right images which are not interleaved may be discarded.
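This row-interleaved assembly can be sketched as follows, assuming for illustration that even-numbered rows display the left image and odd-numbered rows the right; the convention is arbitrary and merely illustrative.

```python
def interleave_rows(left, right):
    """Assemble a row-interleaved stereo composite from two rendered images.

    Even rows are taken from the left image and odd rows from the right
    (an illustrative convention); the other half of each rendered image
    is simply discarded.
    """
    assert len(left) == len(right)
    return [left[y] if y % 2 == 0 else right[y] for y in range(len(left))]
```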
A disadvantage of this algorithm is that it is inherently inefficient, as unused pixels from each rendered image are discarded when assembling the composite image. As a simple example, assuming a single view stereo image is generated, one half of the pixels for each image will be discarded. The inefficiency increases proportionally as multiple views are supported, as a smaller percentage of display space is allocated to each view and correspondingly fewer pixels from each rendered image are used. Further inefficiencies result from the fact that the scene data must be accessed for the processing pass for each image. Often, transferring large amounts of data into a CPU or GPU for such processing represents a significant bottleneck.
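The proportional growth of the waste is easy to quantify: with N views, each rendered at full resolution but allotted only 1/N of the display, a fraction (N-1)/N of every rendered image is discarded. A quick check of that arithmetic:

```python
def discarded_fraction(num_views):
    """Fraction of each fully rendered image that is thrown away when only
    1/num_views of the display area is allocated to each view."""
    return (num_views - 1) / num_views
```

With a single stereo pair (2 views), half of each rendered image is unused; with 8 views, seven eighths of each rendered image is unused.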
Accordingly, what is needed is an improved technique for generating images for auto-stereo displays.