1. Field of the Invention
This invention relates generally to the field of computer graphics and, more particularly, to graphics systems that render realistic images based on three-dimensional graphics data.
2. Description of the Related Art
A computer system typically relies upon its graphics system for producing visual output on a computer screen or display device. Early graphics systems were only responsible for taking what the processor produced as output and displaying it on the screen. In essence, they acted as simple translators or interfaces. Modern graphics systems, however, incorporate graphics processors with a great deal of processing power. The graphics systems now act more like coprocessors rather than simple translators. This change is due to the recent increase in both the complexity and amount of data being sent to the display device. For example, modern computer displays have many more pixels, greater color depth, and are able to display images with higher refresh rates than earlier models. Similarly, the images displayed are now more complex and may involve advanced rendering and visual techniques such as anti-aliasing and texture mapping.
As a result, without considerable processing power in the graphics system, the computer system's CPU would spend a great deal of time performing graphics calculations. This could rob the computer system of the processing power needed for performing other tasks associated with program execution, and thereby dramatically reduce overall system performance. With a powerful graphics system, however, when the CPU is instructed to draw a box on the screen, the CPU is freed from having to compute the position and color of each pixel. Instead, the CPU may send a request to the video card stating "draw a box at these coordinates." The graphics system then draws the box, freeing the CPU to perform other tasks.
Since graphics systems typically perform only a limited set of functions, they may be customized and therefore far more efficient at graphics operations than the computer's general-purpose microprocessor. While early graphics systems were limited to performing two-dimensional (2D) graphics, their functionality has increased to support three-dimensional (3D) wire-frame graphics, 3D solids, and now includes support for textures and special effects such as advanced shading, fogging, alpha-blending, and specular highlighting.
The rendering ability of 3D graphics systems has been improving at a breakneck pace. A few years ago, shaded images of simple objects could only be rendered at a few frames per second, but today's systems support the rendering of complex objects at 60 Hz or higher. At this rate of increase, in the not too distant future graphics systems will literally be able to render more pixels in real time than a single human's visual system can perceive. While this extra performance may be useable in multiple-viewer environments, it may be wasted in the more common single-viewer environments. Thus, a graphics system is desired which is capable of utilizing the increased graphics processing power to generate more realistic images.
While the number of pixels and the frame rate are important in determining graphics system performance, another factor of equal or greater importance is the visual quality of the image generated. For example, an image with a high pixel density may still appear unrealistic if edges within the image are too sharp or jagged (also referred to as "aliased"). One well-known technique to overcome these problems is anti-aliasing. Anti-aliasing involves smoothing the edges of objects by shading pixels along the borders of graphical elements. More specifically, anti-aliasing entails removing higher frequency components from an image before they cause disturbing visual artifacts. For example, anti-aliasing may soften or smooth high contrast edges in an image by forcing certain pixels to intermediate values (e.g., around the silhouette of a bright object superimposed against a dark background).
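The edge-smoothing described above can be sketched as a simple supersampling average. This is an illustrative example only, not an implementation from the patent; the scene function `shade` and the grid size are hypothetical choices.

```python
# Illustrative sketch: anti-alias a pixel by averaging several sub-pixel
# samples, so pixels that straddle a high-contrast edge are forced to
# intermediate values.

def shade(x, y):
    """Hypothetical scene: a bright object (1.0) covers x < 2.5,
    with a dark background (0.0) elsewhere."""
    return 1.0 if x < 2.5 else 0.0

def antialiased_pixel(px, py, n=4):
    """Average an n x n grid of sub-pixel samples inside pixel (px, py)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Sample positions spread evenly across the pixel's interior.
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += shade(sx, sy)
    return total / (n * n)

row = [antialiased_pixel(px, 0) for px in range(5)]
# Pixels fully inside the object remain 1.0, pixels fully outside remain
# 0.0, and the pixel straddling the edge takes an intermediate value.
```

Only the edge pixel is altered; interior pixels are unaffected, which is why anti-aliasing smooths silhouettes without blurring the whole image.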
Another visual effect that adds realism and improves the quality of the image is called "motion blur". Motion blur is the ability to selectively blur objects that are in motion. For example, if a car is moving quickly across the screen, the scene will tend to appear more realistic if the car is blurred relative to the background.
Turning now to FIGS. 1A-C, an example sequence of frames is shown. Each frame represents the scene rendered at a particular point in time. Unfortunately, when these frames are displayed in rapid succession, the resulting image of the car moving across the scene appears unrealistic to most viewers because the car appears "too sharp" or too "in focus".
Turning now to FIGS. 2A-B, a slightly more realistic set of frames is shown. In these frames, the background (i.e., the traffic light) is stationary while the car is rendered across a range of different positions in each frame. When displayed in rapid succession, a series of frames such as those in FIGS. 2A-B will appear more realistic than the series of frames in FIGS. 1A-C.
Turning now to FIG. 3, an image with even more realistic motion blur is shown. With motion blur applied, the motion of the car is conveyed in a more convincing manner. The motion of the car is particularly apparent when compared with the sharp or in-focus nature of the traffic light.
Turning now to FIG. 4, another example image illustrating motion blur is shown. In this image, however, the viewpoint (also called the camera location) is panned to match the movement of the car. As a result, the stationary traffic light appears to be blurred while the rapidly moving car appears to be sharp and in-focus.
As these example images illustrate, a graphics system configured to generate images with motion blur would be particularly desirable. Furthermore, a system and method for rendering realistic images with the ability to selectively "turn on" motion blur for specific objects in a scene (e.g., the traffic light or the car) is desired.
Another desirable visual effect for graphics systems is a depth of field effect. Depending upon the implementation, a depth of field effect attempts to blur objects or areas of an image or scene that are either too close or too far away from a particular focal point. In many cases, the focal point and the amount of blur are a function of camera or viewpoint parameters determined by the graphic artist creating the scene. For example, an artist may create a scene in which a bird is perched on the branch of a tree. The leaves in front of the bird, the leaves behind the bird, and the mountains in the background may all be blurred, while the bird may be in sharp focus. This effect may mimic the image seen through the lens of a camera that is focused on a particular object in the distance.
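One common way to model the relationship between depth and blur described above is to grow the blur with distance from the focal plane. The function below is a minimal sketch under that assumption; the linear scale and the clamp value are hypothetical parameters, not values from the patent.

```python
def blur_radius(depth, focal_depth, scale=0.5, max_radius=8.0):
    """Hypothetical depth-of-field model: blur grows linearly with the
    distance from the focal plane and is clamped to a maximum radius.
    Objects at the focal depth (e.g., the bird on the branch) stay sharp;
    nearer or farther objects (leaves, mountains) are blurred."""
    return min(max_radius, scale * abs(depth - focal_depth))
```

A renderer could feed this radius into whatever blur filter it applies per sample or per pixel; the point of the sketch is only that blur is a function of distance from the focal point.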
Yet another visual effect for graphics systems is a type of transparency effect referred to as a "screen door" effect. This effect attempts to mimic the image that results from viewing a scene from a distance through certain semi-opaque objects, for example a window screen or chain link fence.
Advantageously, these effects allow artists and graphics programmers to improve the realism of images rendered on computer graphics systems. Most graphics systems, however, do not have hardware capable of implementing these effects in real time. As a result, these effects are typically only applied offline on a frame-by-frame basis using software applications (e.g., using Pixar's Renderman™ application). Since these effects tend to be highly dependent upon viewpoint location, the lack of hardware capable of performing these effects in real time prevents applications such as 3D games and simulators from taking full advantage of these effects. Thus a graphics system capable of performing motion blur, depth of field, and/or transparency effects in real time is needed.
The present invention contemplates the use of a "super-sampled" graphics system that selectively renders samples into a sample buffer, and then filters the samples in real time to form output pixels. Advantageously, this configuration allows the graphics system to generate high quality images and to selectively apply one or more of the effects described above (e.g., motion blur, depth of field, and screen door-type transparency) in real time.
In one embodiment, the graphics system may comprise a graphics processor, a sample buffer, and a sample-to-pixel calculation unit. The graphics processor is configured to receive a set of three-dimensional (3D) graphics data and render a plurality of samples based on the set of 3D graphics data. The processor is also configured to generate sample tags for the samples, wherein the sample tags are indicative of whether or not the samples are to be blurred. The super-sampled sample buffer is coupled to receive and store the samples from the graphics processor. The sample-to-pixel calculation unit is coupled to receive and filter the samples from the super-sampled sample buffer to generate output pixels, which in turn are displayable to form an image on a display device. The sample-to-pixel calculation units are configured to select the filter attributes used to filter the samples into output pixels based on the sample tags. The graphics processor may effectively calculate how blurry a particular sample is, and then store a tag that indicates the level of blur with the sample in the sample buffer. Advantageously, in some embodiments the system may be configured to filter the samples into blurred output pixels in real time and without storing the samples in an intervening frame buffer.
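The tagging step above can be sketched in software for illustration. Everything here is an assumption made for the sketch: the `Sample` record, the use of per-object velocity to derive the blur level, and the quantization to a small tag range are all hypothetical; the patent only requires that the processor store, with each sample, a tag indicating its level of blur.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One rendered sample plus its blur tag, as stored in the sample buffer."""
    x: float
    y: float
    color: tuple
    blur_tag: int  # 0 = sharp; higher values indicate more blur

def render_sample(x, y, color, velocity, blur_scale=1.0):
    """Hypothetical renderer step: derive a blur tag from the sample's
    screen-space velocity (a motion-blur criterion) and attach it to the
    sample before it is written to the sample buffer."""
    speed = (velocity[0] ** 2 + velocity[1] ** 2) ** 0.5
    tag = min(3, int(speed * blur_scale))  # quantize the blur level to 0..3
    return Sample(x, y, color, tag)
```

A stationary traffic light would yield samples tagged 0 (sharp), while a fast-moving car would yield samples with higher tags, so the downstream filter can treat them differently.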
Depending upon the exact implementation, the attributes encoded in the sample tag may include one or more of the following: the filter's extent or boundary (e.g., a radius for a circular filter), the filter shape (e.g., circular, box, ellipsoidal, spherical, etc.), the filter type (sinc, tent, band pass, etc.), the directional orientation of the filter (not applicable in all cases), and the maximum or minimum number of samples to be filtered. The sample-to-pixel calculation units may be configured to select different filters and/or filter attributes on a pixel-by-pixel basis for each particular output pixel based on the tags associated with the samples being filtered.
In the event that the samples being filtered have different tags, a number of different "tie-breakers" may be used to select which filter should be used. In one embodiment, the sample-to-pixel calculation units may be configured to select the filter attributes based on the sample tag that corresponds to the sample that is the closest to the center of the filter. In another embodiment, the sample-to-pixel calculation units may be configured to select the filter attributes used based on the most prevalent sample tag for each set of samples being filtered to form each particular output pixel.
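The two tie-breaker embodiments can be sketched directly. In this sketch samples are represented as bare `(x, y, blur_tag)` tuples; the function names and representation are assumptions for illustration, not from the patent.

```python
from collections import Counter

# Samples are represented here as (x, y, blur_tag) tuples.

def tag_nearest_center(samples, cx, cy):
    """Tie-breaker 1: adopt the tag of the sample closest to the
    filter center (cx, cy)."""
    nearest = min(samples, key=lambda s: (s[0] - cx) ** 2 + (s[1] - cy) ** 2)
    return nearest[2]

def tag_most_prevalent(samples):
    """Tie-breaker 2: adopt the most common tag among the samples
    being filtered for this output pixel."""
    return Counter(s[2] for s in samples).most_common(1)[0][0]
```

The two policies can disagree: a single sharp sample sitting at the filter center wins under the first rule but can be outvoted under the second.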
In some embodiments, the samples may be stored in the sample buffer according to bins. The sample-to-pixel calculation units may then be configured to select a small filter radius when no samples with blur tags are within any of the bins that contain potential samples for a particular output pixel. If, however, there are samples with tags that indicate blur within one or more of the bins, then the sample-to-pixel calculation units may be configured to select a larger filter radius. The sample tags may be stored in a separate memory (still considered part of the "sample buffer" for purposes of this application), or in a part of the sample buffer proper (e.g., in the portion of the sample buffer designated for alpha or transparency information). The graphics processor may be configured to determine the appropriate tag for each sample based on a number of different criteria, such as the desired final image attributes, blur data embedded within the 3D geometry data, and the viewer's viewpoint/point of focus/point of foveation (defined below).
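The bin-based radius selection described above reduces to a single scan over the candidate bins. In this sketch each bin is a list of `(x, y, blur_tag)` tuples, and the two radii are hypothetical values chosen for illustration.

```python
def select_filter_radius(bins, small=0.7, large=2.0):
    """Sketch of the bin-based embodiment: use the small radius only when
    no candidate bin for this output pixel holds a blur-tagged sample;
    otherwise widen the filter to produce a blurred output pixel.
    Bins hold (x, y, blur_tag) tuples; tag 0 means sharp."""
    any_blurred = any(tag != 0 for b in bins for (_x, _y, tag) in b)
    return large if any_blurred else small
```

Because the decision only needs the tags, the calculation unit can pick the radius before touching any color data, which fits a real-time, single-pass filter.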
A method for rendering a set of 3D graphics data is also contemplated. In one embodiment the method comprises rendering a plurality of samples based on the 3D graphics data. Tags are generated for the samples, wherein the tags are indicative of the samples' blurriness. The rendered samples and tags are stored in a sample buffer. One or more sets of stored samples are selected to be filtered into output pixels, and the filter to be used is selected based on the selected samples' tags. Finally, the selected samples are filtered to form output pixels using the selected filter. As noted above, the samples may be filtered into output pixels in real time and may be provided to a display device without being stored in an intervening frame buffer. Advantageously, depending upon the exact implementation, the tags may encode filter attribute information to implement one or more different blur-type effects, e.g., motion blur, depth of field effects, and ripple effects.
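The filter-selection and filtering steps of the contemplated method can be condensed into one function for illustration. The tuple layout `(x, y, blur_tag, value)`, the box filter, and the two radii are all assumptions of this sketch; the patent leaves the filter type and attributes open.

```python
def filter_to_pixel(samples, cx, cy):
    """Sketch of the contemplated method for one output pixel:
    (1) select the filter based on the selected samples' tags, then
    (2) filter the samples with it to produce the output pixel value.
    Samples are (x, y, blur_tag, value) tuples; tag 0 means sharp."""
    # Step 1: widen the filter if any selected sample is tagged for blur.
    radius = 2.0 if any(tag for _x, _y, tag, _v in samples) else 0.7
    # Step 2: box-filter (average) the samples inside the chosen radius.
    inside = [v for x, y, _tag, v in samples
              if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
    return sum(inside) / len(inside) if inside else 0.0
```

With all-sharp tags only nearby samples contribute, preserving edges; a single blur tag widens the support so distant samples are averaged in, producing the blurred pixel without any intervening frame buffer.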