Computer graphics systems are frequently used to model a scene having three-dimensional objects and display the scene on a two-dimensional display device such as a cathode ray tube or liquid crystal display. Typically, the three-dimensional objects of the scene are each represented by a multitude of polygons (or primitives) that approximate the shape of the object. Rendering the scene for display on the two-dimensional display device is a computationally intensive process. It is therefore frequently a slow process, even with today's microprocessors and graphics processing devices.
Rasterization, which is part of the rendering operation, is the process that converts the simple geometric description of a graphics primitive into pixels for display. A typical primitive, as shown in FIG. 1A, is a triangle T1. Other area or surface primitives are conventionally converted into one or more triangles prior to rasterization. The triangle T1 is represented by the (x,y,z) coordinates and other properties (such as colors and texture coordinates) at each of its vertices. The (x,y) coordinates of a vertex give its location in the plane of the display. The z-coordinate indicates how far the vertex is from the selected view point of the three-dimensional scene. Rasterization may be divided into four tasks: scan conversion, shading, visibility determination, and frame buffer update.
Scan conversion utilizes the (x,y) coordinates of the vertices of each triangle to compute a set of pixels, S, which cover the triangle.
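A minimal sketch of this step, using the common edge-function (half-space) test at pixel centers; the function names and the inside-on-edge convention are illustrative assumptions, not taken from the source:

```python
# Hypothetical sketch of scan conversion: compute the set of pixels S
# covering a triangle by testing each pixel center against the three
# edge functions. Names and conventions are illustrative only.

def edge(ax, ay, bx, by, px, py):
    """Signed area term for edge a->b evaluated at point p."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def scan_convert(v0, v1, v2):
    """Return the set S of (x, y) pixels whose centers lie inside the triangle."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    s = set()
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], px, py)
            # inside if all three edge functions agree in sign (edges included)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                s.add((x, y))
    return s
```

For the triangle with vertices (0,0), (4,0), (0,4), this yields the 10 pixels whose centers fall inside or on the triangle.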
Shading computes the colors of the pixels within the set S. There are numerous schemes for computing colors, some of which involve computationally intensive techniques such as texture mapping.
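One simple shading scheme, Gouraud-style interpolation of the vertex colors, can be sketched as follows; the helper names and the barycentric formulation are illustrative assumptions, not from the source:

```python
# Hypothetical sketch of Gouraud-style shading: blend the three vertex
# colors across the triangle using barycentric weights at each pixel.

def barycentric(v0, v1, v2, px, py):
    """Return barycentric weights (w0, w1, w2) of point p in the triangle."""
    def area(ax, ay, bx, by, cx, cy):
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    total = area(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1])
    w0 = area(v1[0], v1[1], v2[0], v2[1], px, py) / total
    w1 = area(v2[0], v2[1], v0[0], v0[1], px, py) / total
    w2 = area(v0[0], v0[1], v1[0], v1[1], px, py) / total
    return w0, w1, w2

def shade_pixel(v0, v1, v2, c0, c1, c2, px, py):
    """Blend the vertex colors c0, c1, c2 at pixel (px, py)."""
    w0, w1, w2 = barycentric(v0, v1, v2, px, py)
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))
```

At the centroid the three weights are equal, so a red, green, and blue vertex blend to an even gray.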
Moreover, the rasterization process may include lighting calculations that simulate the effects of light sources upon the surfaces of the triangles of the scene. Typically, the position of each triangle is identified by the (x,y,z) coordinates of its three vertices, with each vertex having a reflectance normal vector originating at that vertex. The reflectance normal vectors of each triangle, along with information about the position of the light sources, are used to calculate the effect of the light sources on the color values determined during the shading calculations for each triangle.
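A common form of such a calculation is Lambertian diffuse lighting, sketched below under simplifying assumptions (a single point light, no ambient or specular terms); the names are illustrative, not from the source:

```python
# Hypothetical sketch of a diffuse (Lambertian) lighting calculation at a
# vertex: scale the vertex color by the cosine of the angle between the
# reflectance normal and the direction toward the light source.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def diffuse_light(color, normal, vertex_pos, light_pos):
    """Return color scaled by max(0, N . L) for a single point light."""
    n = normalize(normal)
    l = normalize(tuple(lp - vp for lp, vp in zip(light_pos, vertex_pos)))
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in color)
```

A light directly along the normal leaves the color unchanged; a light behind the surface contributes nothing.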
Visibility determination utilizes the z-coordinate, also called the depth value, of each pixel to compute the set of pixels, Sv (a subset of S), which are "visible" for the triangle. The set Sv will differ from the set S if any of the pixels in set S are covered by previously rasterized triangles whose z values are closer to the selected viewpoint. Thus, for each triangle in the scene, a pixel is "visible" if it is in the set Sv, or "hidden" if it is in the set S but not in the set Sv. Moreover, a triangle is "all visible" if the set Sv is identical to the set S, "partially hidden" if the set Sv is not identical to the set S and the set Sv is not empty, or "all hidden" if the set Sv is empty. For example, FIG. 1B shows two triangles, T1 and T2, wherein triangle T1 is partially hidden by triangle T2.
Rasterization is completed by writing the colors of the set of visible pixels Sv to a frame buffer for display, and writing the z-coordinates of those pixels to a z-buffer.
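Taken together, the visibility test and frame buffer update amount to the classic z-buffer algorithm. A minimal sketch, assuming that a smaller z value means closer to the viewpoint; the names are illustrative:

```python
# Hypothetical sketch of visibility determination plus frame buffer update:
# each candidate pixel is compared against the z-buffer, and only the
# pixels that pass (the set Sv) overwrite the frame buffer and z-buffer.

def rasterize_pixels(pixels, frame_buffer, z_buffer):
    """pixels: list of (x, y, z, color). Returns the visible subset Sv."""
    visible = []
    for x, y, z, color in pixels:
        if z < z_buffer[y][x]:          # closer than what is already stored
            z_buffer[y][x] = z          # update the depth value
            frame_buffer[y][x] = color  # update the displayed color
            visible.append((x, y))
    return visible
```

A pixel drawn behind an already-stored, closer pixel is rejected, so later triangles cannot overwrite nearer ones.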
The values stored in the frame buffer are converted to analog RGB (Red-Green-Blue) signals which are then fed to the display monitor. The data stored in the frame buffer is transferred to the display monitor at the monitor refresh rate, which typically exceeds 60 times per second and approaches 85 times per second on more expensive monitors.
One problem encountered by graphics processing systems is updating the scene representation from one frame to the next. If the frame buffer is modified while it is being displayed on the monitor, the monitor displays portions of an earlier frame together with portions of the current frame, producing a confusing and distracting picture. Therefore, three-dimensional graphics processing systems use double buffering to realize a smooth transition from one frame to the next. With this scheme the current frame is displayed on the monitor from one buffer (usually called the front buffer) while the next frame is written into another frame buffer (usually called the back buffer). When the next frame is ready, the buffer containing it is switched to be the displayed frame buffer, and the buffer that was switched out is used in the creation of the following frame. In the absence of two buffers, image tears would appear and objects would be drawn at screen positions corresponding to both the previous frame and the current frame. Many higher-end machines use hardware double buffering, while personal computers typically use software double buffering, in which the buffer containing the next frame is quickly copied, as one contiguous block, into the displayed frame buffer area.
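The software double-buffering scheme described above can be sketched as follows; the class and its methods are illustrative, not from any particular graphics API:

```python
# Hypothetical sketch of software double buffering: all drawing goes into
# the back buffer, then the completed frame is copied as one block into
# the displayed front buffer, so the monitor never shows a half-updated frame.

class DoubleBuffer:
    def __init__(self, width, height, clear=0):
        self.front = [[clear] * width for _ in range(height)]  # displayed
        self.back = [[clear] * width for _ in range(height)]   # drawn into

    def draw(self, x, y, color):
        self.back[y][x] = color  # drawing never touches the front buffer

    def swap(self):
        # Software double buffering copies the back buffer into the front
        # buffer; hardware double buffering would flip buffer pointers instead.
        for y, row in enumerate(self.back):
            self.front[y][:] = row
```

Until swap() is called, the displayed front buffer is untouched, which is exactly what prevents the mixed-frame artifacts described above.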
The main drawback of double buffering is the cost of the second frame buffer. As screen sizes and pixel depths increase, this drawback becomes more pronounced. For example, for a 1280×1024 pixel screen with 24-bit RGB (Red-Green-Blue color representation), the frame buffer contains 1.25 million pixels and uses 3.75 MB of memory. For a screen with HDTV resolution (1920×1035 pixels), the frame buffer uses approximately 6 MB of memory. This extra memory can add significant cost to a typical graphics workstation.
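The memory figures quoted above can be reproduced directly; the helper below assumes binary megabytes (1 MB = 2^20 bytes), which matches the numbers in the text:

```python
# Reproduce the frame buffer memory figures from the text, assuming
# binary megabytes (1 MB = 2**20 bytes) and 8 bits per byte.

def frame_buffer_mb(width, height, bits_per_pixel):
    """Frame buffer size in (binary) megabytes."""
    return width * height * (bits_per_pixel / 8) / 2**20

print(frame_buffer_mb(1280, 1024, 24))             # 3.75 MB for 1280x1024, 24-bit
print(round(frame_buffer_mb(1920, 1035, 24), 2))   # 5.69 MB, i.e. roughly 6 MB
```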
To avoid the cost of the second frame buffer, existing solutions split the first frame buffer into two buffers and use fewer bits per pixel. For example, a 24-bit RGB frame buffer is partitioned into two 12-bit RGB frame buffers, which obviously reduces the fidelity of the image. Present solutions that offer double buffering on a 24-bit graphics card split the 24 bits into two 12-bit banks for the complete life of a graphics application, so even still images are displayed with 12-bit RGB. This sacrifice is unnecessary when objects are stationary; double buffering is needed most in a dynamic environment, when objects are moving. Consequently, there is a need for a double buffering mechanism that provides a higher quality image while using less memory.
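The split-buffer scheme can be illustrated with simple bit packing: two 12-bit RGB pixels (4 bits per channel) share one 24-bit word. The packing layout and names below are illustrative assumptions, not taken from any particular graphics card:

```python
# Hypothetical sketch of splitting one 24-bit pixel word into two 12-bit
# RGB banks (4 bits per channel each), as in the split-buffer scheme
# described above. The layout chosen here is an illustrative assumption.

def pack_split(front_rgb, back_rgb):
    """Pack two 12-bit RGB values (4 bits/channel) into one 24-bit word."""
    def to12(rgb):
        r, g, b = (c >> 4 for c in rgb)  # keep only the top 4 bits per channel
        return (r << 8) | (g << 4) | b
    return (to12(front_rgb) << 12) | to12(back_rgb)

def unpack_front(word):
    """Recover the displayed (front) 12-bit pixel, expanded to 8-bit channels."""
    v = word >> 12
    return tuple(((v >> s) & 0xF) << 4 for s in (8, 4, 0))
```

The round trip makes the fidelity loss concrete: the low 4 bits of each 8-bit channel are discarded, which is exactly the image-quality cost the text describes.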