1. Field of the Invention
The invention relates to stereo image rendering, and more particularly to a stereo graphics system based on depth-based image rendering and a processing method thereof.
2. Description of the Related Art
In general, three-dimensional (3D) graphics rendering may be performed with or without hardware 3D accelerators. In a stereo graphics application, however, a scene must be drawn twice: once for the left-eye view and once for the right-eye view.
FIG. 1 illustrates the concept of stereo graphics and 3D vision. The bottom-left and bottom-right circles represent the positions of a human's left eye and right eye, respectively. “E” represents the eye separation distance between the two eyes. “P” represents the distance between the projected views of the left and right eyes on the screen plane (Screen_Plane); this disparity in the stereo video is interpreted as view depth by the human brain. Zi represents the view distance from the eyes to the screen plane. The top point illustrates an image point displayed on the screen plane. Zm represents the view depth, from the screen plane to the image point, that gives the human viewer the impression of a 3D scene.
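The geometry of FIG. 1 can be expressed with the standard similar-triangles relation between disparity and perceived depth. The formula below is a minimal illustrative sketch using the figure's symbols (E, P, Zi, Zm); it is not stated in the source and assumes an image point behind the screen plane with positive disparity:

```python
def perceived_depth(E, P, Zi):
    """Perceived depth Zm behind the screen plane for disparity P.

    By similar triangles in FIG. 1: P / E = Zm / (Zi + Zm),
    which rearranges to Zm = Zi * P / (E - P).  Valid only for P < E;
    as P approaches E, the perceived depth tends to infinity.
    """
    if P >= E:
        raise ValueError("disparity P must be smaller than eye separation E")
    return Zi * P / (E - P)
```

For example, with an eye separation of 65 mm, a screen disparity of 6.5 mm, and a view distance of 600 mm, the point is perceived roughly 66.7 mm behind the screen.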
3D stereo graphics operations include 3D graphics rendering and stereovision. A stereo graphics accelerator design comprises rendering of general 3D graphics and stereo image display. Real-time 3D rendering can be achieved using graphics accelerators with 3D graphics chips, such as 3Dlabs' GLINT line. The key component in the design of a high-resolution stereo accelerator is the embedding of stereoscopic display functions in a graphics accelerator.
As described, stereoscopic graphics can only be viewed if both the left and right images, a stereo pair, are displayed simultaneously. Thus, two double-buffered memories are required for stereoscopic display. That is, the frame buffer must be divided into four buffers: a front left buffer, a back left buffer, a front right buffer, and a back right buffer. Each double-buffered memory stores the information of either the left-eye image or the right-eye image. This frame buffer organization is referred to as quad buffering.
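The quad-buffering scheme above can be modeled as two double-buffered channels whose front/back roles are exchanged together. The following sketch is a hypothetical illustration (the class and method names are assumptions, not part of the source); it shows only the buffer bookkeeping, not actual rendering or scan-out:

```python
class QuadBuffer:
    """Illustrative model of a quad-buffered frame buffer: front/back
    pairs for the left eye and for the right eye."""

    def __init__(self):
        self.front_left = self.back_left = None
        self.front_right = self.back_right = None

    def render_pair(self, left_image, right_image):
        # A new stereo pair is always drawn into the back buffers
        # while the front pair is being scanned out to the display.
        self.back_left, self.back_right = left_image, right_image

    def swap(self):
        # Both eyes are swapped together so the displayed left/right
        # images always belong to the same stereo pair.
        self.front_left, self.back_left = self.back_left, self.front_left
        self.front_right, self.back_right = self.back_right, self.front_right
```

Swapping both channels in one operation reflects the requirement that the left and right images of a stereo pair be displayed together.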
In order to correctly view stereoscopic images on a display device, the left eye can only view the left image output from the left buffer, and the right eye can only view the right image output from the right buffer. Each eye view should be refreshed often enough to avoid flickering. Time-multiplexed stereo devices, such as liquid crystal stereo glasses, are typically used for viewing stereo image pairs.
Referring to the NVIDIA 3D Stereo User's Guide, as shown in FIG. 2, graphics application 17 determines a viewpoint and graphical data, graphics pipeline 23 renders an image based on the viewpoint and the graphical data, frame buffer 26 stores the rendered image, and display device 29 displays the rendered image. When the stereo mode is on, two camera viewpoints are rendered, which reduces the frame rate by at least half. This frame rate degradation may make a game unplayable or the display quality unacceptable.
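The frame rate penalty of rendering two viewpoints on a single pipeline can be sketched with simple arithmetic. This is a hypothetical model (function and parameter names are assumptions); the source states only that the rate drops by at least half, which the overhead term below accounts for:

```python
def stereo_frame_rate(mono_fps, per_view_overhead=0.0):
    """Upper bound on the stereo frame rate when one pipeline renders
    both eye views sequentially.  Each frame now costs at least twice
    the mono frame time, plus any per-view stereo overhead (expressed
    as a fraction of the mono frame time)."""
    return mono_fps / (2.0 * (1.0 + per_view_overhead))
```

A scene that runs at 60 frames per second in mono mode thus runs at no more than 30 frames per second in stereo mode on the same single pipeline.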
Further, U.S. Pat. No. 6,985,162 discloses systems and methods for rendering active stereo graphical data as passive stereo. When a stereo mode is activated, two camera views are rendered using multiple graphics pipelines. Master pipeline 55 receives graphical data, renders two-dimensional (2D) graphical data to frame buffer 65, and routes three-dimensional (3D) graphical data to slave pipelines 56-59, which render the 3D graphical data to frame buffers 66-69, respectively. Compositor 76 combines the data streams from frame buffers 65-69 into a single data stream that is provided to display device 83. Since more than one graphics pipeline is employed for rendering the left and right images, the frame rate is the same as in the 2D display mode; however, hardware cost and the bandwidth required for exchanging graphics data are increased.