1. Field of the Invention
This invention relates generally to the field of 3-D graphics and, more particularly, to a system and method for rendering and displaying 3-D graphical objects.
2. Description of the Related Art
Graphics systems operate on graphics primitives to generate video output pixels. Some prior art graphics systems are capable of rendering pixels at a first resolution (the render resolution) and interpolating the rendered pixels up to the resolution required by a display device (the output resolution). When the rate of primitives (and especially textured primitives) is high, the rendering hardware may not be able to maintain the frame rate required by the display device. Texture access bandwidth is quite often the main culprit. Thus, the rendering resolution may be decreased so the frame render time will drop to a value below the frame output period (i.e. inverse of video frame rate). Image quality suffers as the ratio of output pixels to render pixels drops. However, dropping frames is generally considered a much worse outcome than lowering image quality for a short while.
One such graphics system is the Infinite Reality® visualization system manufactured by SGI. The SGI system generates a number of supersamples per render pixel region, and applies a box filter to generate each render pixel from the samples in the corresponding render pixel region. The render pixels are then interpolated up to the video output resolution. While the SGI system employs supersampling, the high-resolution supersample information must nevertheless pass through a low-resolution intermediate stage even though the ultimate goal is an intermediate-resolution video output. Thus, there exists a need for a system and method capable of (a) generating video output pixels directly from supersamples and (b) "dialing back" the resolution of render pixels to maintain the frame rate requirement, especially when the primitive processing load is large.
A graphics system may, in some embodiments, comprise a rendering engine, a texture memory, a sample buffer and a filtering engine. The texture memory may store texels (i.e. texture elements) to be applied to the surfaces of objects. The rendering engine receives a stream of graphics primitives, and renders the graphics primitives in terms of supersamples which are written into the sample buffer. The rendering computations are organized based on an array of render pixels. The rendering engine is configured to determine a collection of render pixels which geometrically intersect each primitive. For a given primitive, each intersecting render pixel is populated with a programmable number of sample positions, and sample values (e.g. color values) are computed at sample positions falling interior to the primitive. For render pixels that intersect textured primitives, the rendering engine accesses the texture memory to determine texture values that are to be applied in the sample value computations (on interior samples).
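The per-primitive rendering step described above may be sketched as follows. This is a minimal illustration in Python, not the patent's prescribed implementation: the function name `render_triangle`, the bounding-box pixel walk, the jittered sample placement, and the edge-function interiority test are all illustrative assumptions.

```python
import random

def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive if (px, py) lies to the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def render_triangle(tri, res_x, res_y, samples_per_pixel, seed=0):
    """Return {(ix, iy): [interior sample positions]} for each render
    pixel (a 1x1 cell of the render array) that the triangle may touch."""
    rng = random.Random(seed)
    (x0, y0), (x1, y1), (x2, y2) = tri
    # Candidate render pixels: the primitive's bounding box, clamped to
    # the render pixel array.
    lo_x = max(0, int(min(x0, x1, x2)))
    hi_x = min(res_x - 1, int(max(x0, x1, x2)))
    lo_y = max(0, int(min(y0, y1, y2)))
    hi_y = min(res_y - 1, int(max(y0, y1, y2)))
    covered = {}
    for iy in range(lo_y, hi_y + 1):
        for ix in range(lo_x, hi_x + 1):
            # Populate the render pixel with a programmable number of
            # jittered sample positions; keep only interior samples.
            interior = []
            for _ in range(samples_per_pixel):
                sx, sy = ix + rng.random(), iy + rng.random()
                e0 = edge(x0, y0, x1, y1, sx, sy)
                e1 = edge(x1, y1, x2, y2, sx, sy)
                e2 = edge(x2, y2, x0, y0, sx, sy)
                # Interior iff on the same side of all three edges.
                if (e0 >= 0 and e1 >= 0 and e2 >= 0) or \
                   (e0 <= 0 and e1 <= 0 and e2 <= 0):
                    interior.append((sx, sy))
            if interior:
                covered[(ix, iy)] = interior
    return covered
```

In a hardware embodiment, sample values (color, depth, texture contributions) would be computed at each interior position and written to the sample buffer; the sketch stops at sample-position generation.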
The horizontal resolution (i.e. the number of render pixels in the horizontal direction) and the vertical resolution (i.e. the number of render pixels in the vertical direction) of the render pixel array are dynamically programmable. Decreasing the render array resolutions implies fewer render pixels geometrically intersect the average primitive. This in turn implies that the number of texture accesses per frame and the number of intersecting render pixels per frame to be populated with samples decrease. Thus, after such a decrease in the render array resolutions, the rendering engine may render frames more quickly. The sample buffer may store the array of render pixels, each render pixel containing a set of rendered supersamples.
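The effect of a resolution decrease on per-primitive work may be illustrated with a coarse bounding-box model (an illustrative assumption; the function name `intersecting_pixels` and the screen-fraction parameterization are not from the patent):

```python
import math

def intersecting_pixels(bbox_frac_w, bbox_frac_h, res_x, res_y):
    """Coarse estimate of render pixels intersected by a primitive whose
    bounding box covers the given fractions of the screen, at a given
    render resolution (ignores edge effects and primitive shape)."""
    return math.ceil(bbox_frac_w * res_x) * math.ceil(bbox_frac_h * res_y)

# A primitive covering 10% of each screen dimension:
full = intersecting_pixels(0.1, 0.1, 1280, 1024)  # full render resolution
half = intersecting_pixels(0.1, 0.1, 640, 512)    # both resolutions halved
# Halving both resolutions cuts the per-primitive render pixel count --
# and with it texture accesses and sample-fill work -- roughly fourfold.
```

Under this model the rendering workload scales approximately with the product of the horizontal and vertical render resolutions, which motivates the dynamic programmability described above.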
The filtering engine scans through a virtual screen space (populated by the render pixels and their associated supersamples) generating virtual pixel centers which cover the render pixel array (or some portion thereof), and computing a video output pixel for each virtual pixel center based on a filtration of supersamples in a neighborhood of the virtual pixel center. Decreasing the render array resolutions implies that a numerically smaller set of render pixels carries the graphics content. In other words, the new render pixel array covers a smaller rectangle in the virtual screen space. Thus, the filtering engine may decrease the horizontal and vertical displacements between virtual pixel centers, as well as the start position of the array of virtual pixel centers, so that they cover the new render pixel array. Furthermore, the filtering engine may decrease the size of the filter support so that it covers a smaller area in virtual screen space but the same area in video output pixel space, and may similarly contract the filter function (or equivalently, expand arguments supplied to the filter function evaluation hardware) in accord with the contracted filter support.
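The derivation of the virtual-pixel-center spacing, start position, and contracted filter support from the render and output resolutions may be sketched as follows (a minimal sketch; the function name `filter_params`, the centered start position, and the radius parameterization are illustrative assumptions):

```python
def filter_params(render_res, output_res, filter_radius_out=1.0):
    """Derive filtering-engine parameters in virtual screen space
    (render pixel units) so that the grid of virtual pixel centers
    covers the render pixel array.

    filter_radius_out is the filter support radius measured in output
    pixel units; it stays fixed while the support contracts in virtual
    screen space as the render resolution drops."""
    rx, ry = render_res
    ox, oy = output_res
    dx, dy = rx / ox, ry / oy          # center-to-center displacements
    start = (dx / 2.0, dy / 2.0)       # first virtual pixel center
    support = (filter_radius_out * dx, filter_radius_out * dy)
    return (dx, dy), start, support
```

For example, rendering at 640x512 for a 1280x1024 display halves the displacements and the support radius in virtual screen space relative to rendering at full resolution, so each output pixel still filters the same area measured in output pixel space.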
A controlling agent (e.g. a graphics application, a graphics API and/or software algorithm running on a processor embedded in the graphics system) may be configured to gather rendering performance measurements from the rendering engine, and to generate an initial estimate of frame rendering time for a next frame based on the rendering performance measurements and the current values of the render resolutions. If the initial estimate for the frame rendering time is large enough relative to the frame time dictated by the output display, the controlling agent may decrease the horizontal and/or vertical resolution of the render pixel array. Thus, the frame render time for the next frame may attain a value smaller than the output frame time. Furthermore, the controlling agent may command the filtering engine to use correspondingly smaller values for the horizontal and vertical displacements between virtual pixel centers, a new value for the start position of the virtual pixel array, and correspondingly smaller dimensions for the filter support and filter function.
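One step of such a controlling agent may be sketched as follows. The function name `adjust_resolution`, the assumption that render time is proportional to the render pixel count, and the `headroom` safety factor are all illustrative choices, not the patent's prescribed policy:

```python
import math

def adjust_resolution(render_res, estimated_time, frame_period,
                      min_res=(320, 240), headroom=0.9):
    """One control step: if the estimated render time for the next frame
    does not fit within the output frame period (with some headroom),
    scale the render resolutions down.

    Model assumption: render time is roughly proportional to the render
    pixel count rx*ry, so each axis is scaled by sqrt(budget / estimate)."""
    rx, ry = render_res
    budget = headroom * frame_period
    if estimated_time <= budget:
        return render_res              # fast enough; keep resolution
    scale = math.sqrt(budget / estimated_time)
    new_rx = max(min_res[0], int(rx * scale))
    new_ry = max(min_res[1], int(ry * scale))
    return (new_rx, new_ry)
```

After such a step, the agent would pass the new resolutions to the filtering engine, which recomputes the virtual-pixel-center displacements, start position, and filter support accordingly.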