1. Field of the Invention
This invention relates generally to the field of computer graphics and, more particularly, to high performance graphics systems.
2. Description of the Related Art
A computer system typically relies upon its graphics system for producing visual output on the computer screen or display device. Early graphics systems were only responsible for taking what the processor produced as output and displaying it on the screen. In essence, they acted as simple translators or interfaces. Modern graphics systems, however, incorporate graphics processors with a great deal of processing power. They now act more like coprocessors rather than simple translators. This change is due to the recent increase in both the complexity and amount of data being sent to the display device. For example, modern computer displays have many more pixels, greater color depth, and are able to display more complex images with higher refresh rates than earlier models. Similarly, the images displayed are now more complex and may involve advanced techniques such as anti-aliasing and texture mapping.
As a result, without considerable processing power in the graphics system, the CPU would spend a great deal of time performing graphics calculations. This could rob the computer system of the processing power needed for performing other tasks associated with program execution and thereby dramatically reduce overall system performance. With a powerful graphics system, however, when the CPU is instructed to draw a box on the screen, the CPU is freed from having to compute the position and color of each pixel. Instead, the CPU may send a request to the video card stating "draw a box at these coordinates." The graphics system then draws the box, freeing the processor to perform other tasks.
Generally, a graphics system in a computer is a type of video adapter that contains its own processor to boost performance levels. These processors are specialized for computing graphical transformations, so they tend to achieve better results than the general-purpose CPU used by the computer system. In addition, they free up the computer's CPU to execute other commands while the graphics system is handling graphics computations. The popularity of graphical applications, and especially multimedia applications, has made high performance graphics systems a common feature of computer systems. Most computer manufacturers now bundle a high performance graphics system with their systems.
Since graphics systems typically perform only a limited set of functions, they may be customized and therefore are far more efficient at graphics operations than the computer's general-purpose central processor. While early graphics systems were limited to performing two-dimensional (2D) graphics, their functionality has increased to support three-dimensional (3D) wire-frame graphics, 3D solids, and now includes support for 3D graphics with textures and special effects such as advanced shading, fogging, alpha-blending, and specular highlighting.
The processing power of 3D graphics systems has been improving at a breakneck pace. A few years ago, shaded images of simple objects could only be rendered at a few frames per second, while today's systems support rendering of complex objects at 60 Hz or higher. At this rate of increase, in the not too distant future, graphics systems will literally be able to render more pixels than a single human's visual system can perceive. While this extra performance may be usable in multiple-viewer environments, it may be wasted in more common primarily single-viewer environments. Thus, a graphics system is desired which is capable of matching the variable resolution of the human visual system (i.e., capable of putting the quality where it is needed or most perceivable).
While the number of pixels is an important factor in determining graphics system performance, another factor of equal import is the quality of the image. For example, an image with a high pixel density may still appear unrealistic if edges within the image are too sharp or jagged (also referred to as "aliased"). One well-known technique to overcome these problems is anti-aliasing. Anti-aliasing involves smoothing the edges of objects by shading pixels along the borders of graphical elements. More specifically, anti-aliasing entails removing higher frequency components from an image before they cause disturbing visual artifacts. For example, anti-aliasing may soften or smooth high contrast edges in an image by forcing certain pixels to intermediate values (e.g., around the silhouette of a bright object superimposed against a dark background).
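The edge smoothing described above can be sketched as a simple one-dimensional low-pass filter that forces border pixels to intermediate values. The `antialias_edge` helper and its filter weights below are illustrative assumptions for this sketch, not part of the disclosed system:

```python
def antialias_edge(pixels, weights=(0.25, 0.5, 0.25)):
    """Soften high-contrast edges by replacing each pixel with a
    weighted average of itself and its neighbors, removing the
    high-frequency components that cause jagged ("aliased") edges."""
    out = []
    n = len(pixels)
    for i in range(n):
        left = pixels[max(i - 1, 0)]      # clamp at the left border
        right = pixels[min(i + 1, n - 1)]  # clamp at the right border
        out.append(weights[0] * left + weights[1] * pixels[i] + weights[2] * right)
    return out

# A hard silhouette edge (bright object against a dark background)...
edge = [0.0, 0.0, 1.0, 1.0]
# ...is softened: pixels along the border take intermediate values.
print(antialias_edge(edge))  # [0.0, 0.25, 0.75, 1.0]
```

The weighted average is exactly the "forcing certain pixels to intermediate values" behavior: only pixels adjacent to the contrast step change, while flat regions are left untouched.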
Another visual effect used to increase the realism of computer images is alpha blending. Alpha blending is a technique that controls the transparency of an object, allowing realistic rendering of translucent surfaces such as water or glass. Another effect used to improve realism is fogging. Fogging obscures an object as it moves away from the viewer. Simple fogging is a special case of alpha blending in which the degree of alpha changes with distance so that the object appears to vanish into a haze as the object moves away from the viewer. This simple fogging may also be referred to as "depth cueing" or atmospheric attenuation, i.e., lowering the contrast of an object so that it appears less prominent as it recedes. More complex types of fogging go beyond a simple linear function to provide more complex relationships between the level of translucence and an object's distance from the viewer. Current state of the art software systems go even further by utilizing atmospheric models to provide low-lying fog with improved realism.
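The "alpha changes with distance" relationship for simple fogging can be sketched as a linear blend toward a haze color. The function name, parameters, and the particular linear ramp below are assumptions of this sketch, not taken from the disclosure:

```python
def linear_fog(object_color, fog_color, distance, fog_start, fog_end):
    """Simple fogging as a special case of alpha blending: the blend
    factor (alpha) grows linearly with distance, so the object fades
    into the haze color as it recedes from the viewer."""
    # Compute the distance-dependent alpha and clamp it to [0, 1].
    alpha = max(0.0, min(1.0, (distance - fog_start) / (fog_end - fog_start)))
    # Standard alpha blend of object color toward fog color.
    return tuple((1.0 - alpha) * c + alpha * f
                 for c, f in zip(object_color, fog_color))

# A red object halfway through the fog range is half-blended into gray haze.
print(linear_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 5.0, 0.0, 10.0))
```

More complex fogging would replace the linear ramp with, e.g., an exponential falloff or a height-dependent atmospheric model, but the blend itself is the same.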
While the techniques listed above may dramatically improve the appearance of computer graphics images, they also have certain limitations. In particular, they may introduce their own aberrations and are typically limited by the density of pixels displayed on the display device.
As a result, a graphics system is desired which is capable of utilizing increased performance levels to increase not only the number of pixels rendered but also the quality of the image rendered. In addition, a graphics system is desired which is capable of utilizing increases in processing power to improve the results of graphics effects such as anti-aliasing.
Prior art graphics systems have generally fallen short of these goals. Prior art graphics systems use a conventional frame buffer for refreshing pixel/video data on the display. The frame buffer stores rows and columns of pixels that exactly correspond to respective row and column locations on the display. Prior art graphics systems render 2D and/or 3D images or objects into the frame buffer in pixel form, and then read the pixels from the frame buffer during a screen refresh to refresh the display. Thus, the frame buffer stores the output pixels that are provided to the display. To reduce visual artifacts that may be created by refreshing the screen at the same time the frame buffer is being updated, most graphics systems' frame buffers are double-buffered.
To obtain more realistic images, some prior art graphics systems have gone further by generating more than one sample per pixel. As used herein, the term "sample" refers to calculated color information that indicates the color, depth (z), transparency, and potentially other information, of a particular point on an object or image. For example, a sample may comprise the following component values: a red value, a green value, a blue value, a z value, and an alpha value (e.g., representing the transparency of the sample). A sample may also comprise other information, e.g., a z-depth value, a blur value, an intensity value, brighter-than-bright information, and an indicator that the sample consists partially or completely of control information rather than color information (i.e., "sample control information"). By calculating more samples than pixels (i.e., super-sampling), a more detailed image is calculated than can be displayed on the display device. For example, a graphics system may calculate four samples for each pixel to be output to the display device. After the samples are calculated, they are then combined or filtered to form the pixels that are stored in the frame buffer and then conveyed to the display device. Using pixels formed in this manner may create a more realistic final image because overly abrupt changes in the image may be smoothed by the filtering process.
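The four-samples-per-pixel example above can be sketched as an equal-weight (box) filter that combines one pixel's samples into a single output pixel. The helper below is an illustrative assumption; it averages only the color components of each (r, g, b, z, alpha) sample tuple for brevity:

```python
def filter_pixel(samples):
    """Combine the super-samples falling in one pixel's footprint into a
    single output pixel using an equal-weight (box) filter.  Each sample
    is an (r, g, b, z, alpha) tuple; only r, g, b are averaged here."""
    n = len(samples)
    r = sum(s[0] for s in samples) / n
    g = sum(s[1] for s in samples) / n
    b = sum(s[2] for s in samples) / n
    return (r, g, b)

# Four samples per pixel: two white, two black -> a mid-gray pixel,
# smoothing what would otherwise be an abrupt transition.
print(filter_pixel([(1.0, 1.0, 1.0, 0.0, 1.0),
                    (0.0, 0.0, 0.0, 0.0, 1.0),
                    (1.0, 1.0, 1.0, 0.0, 1.0),
                    (0.0, 0.0, 0.0, 0.0, 1.0)]))
```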
These prior art super-sampling systems typically generate a number of samples that are far greater than the number of pixel locations on the display. These prior art systems typically have rendering processors that calculate the samples and store them into a render buffer. Filtering hardware then reads the samples from the render buffer, filters the samples to create pixels, and then stores the pixels in a traditional frame buffer that is used to refresh the display device. The traditional frame buffer is typically double-buffered, with one side being used for refreshing the display device while the other side is updated by the filtering hardware. These systems, however, have generally suffered from limitations imposed by the conventional frame buffer and by the added latency caused by the render buffer and filtering. Therefore, an improved graphics system is desired which includes the benefits of pixel super-sampling while avoiding the drawbacks of the conventional frame buffer.
U.S. patent application Ser. No. 09/251,453 titled "Graphics System With Programmable Real-Time Sample Filtering" discloses a computer graphics system that utilizes a super-sampled sample buffer and a sample-to-pixel calculation unit for refreshing the display. The graphics processor generates a plurality of samples and stores them into a sample buffer. The graphics processor preferably generates and stores more than one sample for at least a subset of the pixel locations on the display. Thus, the sample buffer is a super-sampled sample buffer which stores a number of samples that may be far greater than the number of pixel locations on the display. The sample-to-pixel calculation unit is configured to read the samples from the super-sampled sample buffer and filter or convolve the samples into respective output pixels, wherein the output pixels are then provided to refresh the display. The sample-to-pixel calculation unit selects one or more samples and filters them to generate an output pixel. The sample-to-pixel calculation unit may operate to obtain samples and generate pixels which are provided directly to the display with no frame buffer therebetween.
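The sample-to-pixel filtering step can be sketched as a convolution over the samples near a pixel center. The cone-shaped (linear falloff) filter, the function name, and the sample layout below are assumptions of this sketch rather than the disclosed hardware:

```python
import math

def sample_to_pixel(samples, center, radius):
    """Convolve the samples within `radius` of a pixel center into one
    output pixel.  Each sample is an (x, y, r, g, b) tuple.  Weights
    fall off linearly with distance from the center (a cone filter),
    and the weighted sum is normalized by the total weight."""
    total_w = 0.0
    acc = [0.0, 0.0, 0.0]
    for (x, y, r, g, b) in samples:
        d = math.hypot(x - center[0], y - center[1])
        if d < radius:                 # only samples inside the filter support
            w = 1.0 - d / radius       # linear falloff toward the edge
            total_w += w
            acc[0] += w * r
            acc[1] += w * g
            acc[2] += w * b
    if total_w == 0.0:
        return (0.0, 0.0, 0.0)         # no samples under the filter
    return tuple(c / total_w for c in acc)

# A red sample at the pixel center dominates; a green sample two units
# away falls outside the filter radius and is ignored.
samples = [(0.0, 0.0, 1.0, 0.0, 0.0), (2.0, 0.0, 0.0, 1.0, 0.0)]
print(sample_to_pixel(samples, center=(0.0, 0.0), radius=1.0))
```

In the disclosed architecture this computation is performed per pixel per refresh, with the resulting pixels sent directly to the display rather than through a frame buffer.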
It would be desirable to use this improved graphics architecture to provide further improved display capabilities, including reduced artifacts, such as when the render rate differs from the pixel generation rate, as well as improved display effects, such as panning, zooming and the like, including 2D panning and zooming as well as 3D movement, e.g., position and rotation changes, around a camera's first nodal point.
The present invention comprises a computer graphics system that utilizes a super-sampled sample buffer and a programmable sample-to-pixel calculation unit for refreshing the display, wherein the graphics system may adjust sample filtering to reduce artifacts or implement display effects. In one embodiment, the graphics system may have a graphics processor, a super-sampled sample buffer, and a sample-to-pixel calculation unit. The graphics processor generates a plurality of samples and stores them into a sample buffer. The graphics processor preferably generates and stores more than one sample for at least a subset of the pixel locations on the display. Thus, the sample buffer is a super-sampled sample buffer which stores a number of samples that, in some embodiments, may be far greater than the number of pixel locations on the display. In other embodiments, the total number of samples may be closer to, equal to, or even less than the total number of pixel locations on the display device, but the samples may be more densely positioned in certain areas and less densely positioned in other areas.
The sample-to-pixel calculation unit is configured to read the samples from the super-sampled sample buffer and filter or convolve the samples into respective output pixels, wherein the output pixels are then provided to refresh the display. The sample-to-pixel calculation unit selects one or more samples and filters them to generate an output pixel. Note the number of samples selected and/or filtered by the sample-to-pixel calculation unit may be one or, in the preferred embodiment, greater than one.
The sample-to-pixel calculation unit may access the samples from the super-sampled sample buffer, perform a filtering operation, and then provide the resulting output pixels directly to the display, preferably in real-time. The graphics system may operate without a conventional frame buffer, i.e., the graphics system may not utilize a conventional frame buffer which stores the actual pixel values that are being refreshed on the display. Note some displays may have internal frame buffers, but these are considered an integral part of the display device, not the graphics system. Thus, the sample-to-pixel calculation units may calculate each pixel for each screen refresh on a real time basis or on an on-the-fly basis.
In one embodiment, the sample-to-pixel calculation unit is operable to adjust the filtering of stored samples to reduce or adjust artifacts, e.g., is operable to selectively adjust the filtering of stored samples in neighboring frames to reduce artifacts between the neighboring frames. For example, the sample-to-pixel calculation unit may select and filter a first set of stored samples to generate first output pixels for display using a first filter, and may later select and filter a second set of stored samples to generate second output pixels for display using a second filter different than the first filter. In one embodiment, the sample-to-pixel calculation unit may selectively adjust the filtering of stored samples in neighboring frames by simulation of various screen effects or display effects, such as panning, zooming and the like, including 2D panning and zooming as well as 3D movement, e.g., position and rotation changes, around a camera's first nodal point, for reduced artifacts.
The sample-to-pixel calculation unit preferably selectively adjusts center locations (centers) in the sample buffer where the filter (e.g., a convolution filter) is applied during filtering of stored samples to reduce artifacts. The center locations where the convolution filter is applied correspond to the centers of the output pixels being generated. The sample-to-pixel calculation unit includes address generator logic for generating addresses corresponding to the center locations, wherein the convolution filter is applied to these center locations in generating output pixels for display. The address generator logic is programmable to generate addresses at selected sub-pixel positions corresponding to the desired centers. In the preferred embodiment, the beginning sub-pixel position address generated by the address generator logic is programmable, and the pixel step size may remain constant. The sample-to-pixel calculation unit is operable to selectively adjust the center locations where the filter is applied in one or more of the x or y direction, and may adjust the center locations of the filter by a sub-pixel distance. The sample-to-pixel calculation unit may utilize a convolution filter in filtering the samples, or other types of filters.
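The address generator behavior described above — a programmable starting sub-pixel position with a constant per-pixel step — can be sketched as follows. The function name and parameters are illustrative assumptions:

```python
def pixel_centers(start_x, start_y, step, width, height):
    """Generate the filter-center addresses for one frame.  The starting
    sub-pixel position (start_x, start_y) is programmable while the
    per-pixel step stays constant, so shifting the start by a fraction
    of `step` moves every convolution center by a sub-pixel distance
    in the x and/or y direction."""
    return [(start_x + col * step, start_y + row * step)
            for row in range(height)
            for col in range(width)]

# Nominal centers for a 2x2 region...
print(pixel_centers(0.5, 0.5, 1.0, 2, 2))
# ...and the same region with all centers nudged a quarter pixel right,
# as might be done between neighboring frames to reduce artifacts.
print(pixel_centers(0.75, 0.5, 1.0, 2, 2))
```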
In this embodiment, the sample buffer may store samples corresponding to an area greater than a viewable area of the display, and one or more samples from outside the (previously) viewable area of the display may be used in generation of output pixels according to the adjusted convolution centers. The graphics system may also be operable to selectively adjust video timing to compensate for the adjustment of the center locations of the convolution filter during filtering of stored samples.
The present invention may be applied where the sample-to-pixel calculation unit generates output pixels at the same rate as the graphics processor rendering samples to the sample buffer. For example, if a current set of stored samples is determined to be similar or identical to a previous set of stored samples that were previously used in generating output pixels in a previous frame, the sample-to-pixel calculation unit may selectively adjust the filtering of the current set of stored samples in a current frame to reduce artifacts. Thus, if a set of stored samples has been previously used in generating first output pixels in a prior frame, the sample-to-pixel calculation unit may selectively adjust the filtering of a similar set (or the same set) of stored samples to generate different pixels in a subsequent frame to reduce artifacts. Thus, in situations where the camera's nodal point remains substantially fixed between neighboring frames, the present invention operates to subtly vary the camera's nodal point to remove any artifacts that may appear between neighboring frames.
The present invention also comprises a graphics system as described above, wherein the sample-to-pixel calculation unit may operate at a different (e.g., higher) rate than the render rate. For example, the sample-to-pixel calculation unit may generate output pixels at a different rate than the graphics processor rendering samples to the sample buffer, e.g., the graphics processor is operable to render the plurality of samples to the sample buffer at a first rate, and the sample-to-pixel calculation unit is operable to generate output pixels at a second greater rate. This allows the convolve pipeline in the sample-to-pixel calculation unit to operate on-the-fly independent of the render rate. In this system, the sample-to-pixel calculation unit is operable to selectively adjust the filtering of stored samples between neighboring frames as described above to reduce artifacts. Thus, where a first set of stored samples is determined to have been previously used in generating output pixels in a prior frame, the sample-to-pixel calculation unit is operable to selectively adjust the filtering of the first set of stored samples in a current (or subsequent) frame to reduce artifacts. Thus, the samples may be created once, and then convolved two or more times with different filters to remove artifacts, until the graphics processor renders new samples into the sample buffer.
In another embodiment, the sample-to-pixel calculation unit is operable to adjust filtering of stored samples to implement a display effect. More particularly, the sample-to-pixel calculation unit is operable to selectively adjust the filtering of stored samples in neighboring frames to implement a display effect between the neighboring frames. The display effect may comprise panning, zooming, rotation, or moving scenes, among others, including 2D panning and zooming as well as 3D movement, e.g., position and rotation changes, around a camera's first nodal point.
In this embodiment, the sample buffer may store samples corresponding to an area greater than a viewable area of the display, and one or more samples from outside the (previously) viewable area of the display may be used to implement the display effect. The sample-to-pixel calculation unit may adjust filtering by adjusting one or more of the positions (centers) of pixels, the radius of the filter, and the pitch between pixels. The sample-to-pixel calculation unit may adjust filtering of stored samples to implement the display effect on a fractional-pixel boundary. For example, the sample-to-pixel calculation unit may selectively adjust the filtering of stored samples in neighboring frames to effect panning or zooming between the neighboring frames on a fractional-pixel boundary.
One benefit of this invention is smoother panning or zooming when the samples are being rendered at a lesser rate than the convolve. For example, assume a situation where the camera is panning in a certain direction, or zooming in or out, and the samples are being rendered at half the rate of the convolve. In this instance, two convolve operations may be performed on the same data, and then a jump to the next pan position occurs in the next rendered frame. According to the present invention, the sample-to-pixel calculation unit may operate to adjust the convolution centers (e.g., move 10.5 pixels to the right) in the second convolution cycle to effect the pan operation, even though new data corresponding to the pan has not yet been rendered. Thus, if a display effect is desired, and if a first set of stored samples has been previously used in generating output pixels in a prior frame, the sample-to-pixel calculation unit is operable to selectively adjust the filtering of the first set of stored samples in a subsequent frame to implement the display effect in the subsequent frame.
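The render-at-half-the-convolve-rate scenario above can be sketched by splitting each rendered frame's pan across the convolve passes, so motion advances on fractional-pixel boundaries instead of jumping once per rendered frame. The helper below is an illustrative assumption:

```python
def pan_offsets(pan_per_render, convolves_per_render):
    """When the convolve runs N times per rendered frame, split the pan
    distance across those N passes: each pass shifts the convolution
    centers by a fraction of the full pan, smoothing the motion even
    though no new samples have been rendered yet."""
    step = pan_per_render / convolves_per_render
    return [i * step for i in range(convolves_per_render)]

# Samples rendered at half the convolve rate, panning 21 pixels per
# rendered frame: the second convolve pass on the same sample data
# shifts its centers 10.5 pixels to the right.
print(pan_offsets(21.0, 2))  # [0.0, 10.5]
```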
A software program embodied on a computer medium and a method for operating a graphics subsystem are also contemplated. In one embodiment, the method comprises first calculating a plurality of sample locations, and then generating a sample for each sample location. The samples may then be stored (e.g., into the super-sampled sample buffer). The sample locations may be specified according to any number of positioning or spacing schemes. The stored samples may then be selected and filtered to form output pixels, which are provided in real time directly to the display, preferably without being stored in a traditional frame buffer. The generation of output pixels may include selectively adjusting the filtering of stored samples to reduce artifacts or to generate display effects. The generation of output pixels may also operate at the same or a different rate than the render rate.