As is known, video graphics circuits are utilized in computers to process images for subsequent display on a display device, which may be a computer monitor, a television, an LCD panel, and/or any other device that displays pixel information. Typically, the central processing unit of a computer generates data regarding the images to be rendered and provides the data to the video graphics circuit. The video graphics circuit, upon receiving the data, processes it to generate pixels representing triangles (i.e., the basic rendering elements of an image). As the video graphics circuit generates the pixel data, it utilizes a frame buffer to store the pixels. When the video graphics circuit has processed a frame of data, the frame buffer is full and is subsequently read such that the pixel data is provided to the display device. As is also known, the frame buffer is of sufficient size to store a frame of data, which directly corresponds to the physical size of the display device. For example, a 640×480 display device requires a frame buffer that includes 640×480 memory locations, each of sufficient size to store pixel data.
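The sizing relationship described above can be sketched as a simple calculation; the function name is illustrative only, and the per-location byte size is left as a parameter since the source does not specify one:

```python
def frame_buffer_locations(width, height):
    # One memory location per physical pixel of the display, so the
    # frame buffer size tracks the display resolution directly.
    return width * height

# A 640x480 display requires 640 * 480 = 307,200 pixel locations.
print(frame_buffer_locations(640, 480))  # -> 307200
```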
Since the display device is of a fixed size, the physical size of a pixel is also fixed. In addition, each pixel can display only a single value of data. As such, an object's edge may appear jagged due to the fixed physical size of the pixel and the single value of data per pixel. The visual perception of the jagged edges of an object depends on the resolution of the display device: the higher the resolution, the less perceivable the jagged edges. For example, a display device having a resolution of 1,024×800 will have less perceivable jagged edges than a display having a resolution of 640×480.
While increasing the resolution of the display device works to reduce the perceivable jagged edges, the ability to increase the resolution is not available for many display devices. When increasing the resolution is not a viable option, or further reduction in the perceivability of jagged edges is desired, anti-aliasing may be utilized. There are a variety of anti-aliasing methods, including over-sampling, fragment buffers, and sort dependent anti-aliasing. In general, the over-sampling method renders a scene at various locations (each offset from the other by a fraction of a pixel). Each of the rendered scenes is stored in a frame buffer. As one would expect, this frame buffer is much larger than a frame buffer used in a video graphics circuit without over-sampling anti-aliasing. For example, if the over-sampling rate is four, the frame buffer must be four times the size of the frame buffer used in a non-anti-aliasing system. Once the various sampled images are stored in the frame buffer, they are filtered together, and the filtered scene is stored in a destination frame buffer. Thus, while this method produces the desired results (i.e., reduced jagged edges), it requires a substantial amount of extra memory.
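The over-sampling flow described above can be sketched roughly as follows. The renderer stub, the four-sample offset pattern, and the box filter are all illustrative assumptions, not details from the source; the point is that the intermediate storage holds one full frame per sample:

```python
def render_scene(width, height, offset_x, offset_y):
    # Hypothetical renderer stub: returns a width x height grid of pixel
    # intensities for the scene sampled at the given sub-pixel offset.
    return [[(x + offset_x) * (y + offset_y) % 256
             for x in range(width)] for y in range(height)]

def oversample(width, height, rate=4):
    """Render the scene `rate` times at fractional-pixel offsets and
    filter the samples into a destination frame buffer."""
    # Sub-pixel offsets for a 4x over-sampling pattern (assumed layout).
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)][:rate]
    # The intermediate storage is `rate` full frames -- the extra-memory cost.
    frames = [render_scene(width, height, ox, oy) for ox, oy in offsets]
    # Box filter: average corresponding pixels into the destination buffer.
    return [[sum(f[y][x] for f in frames) / rate
             for x in range(width)] for y in range(height)]
```

A 640×480 destination frame at rate 4 therefore requires four 640×480 intermediate frames before filtering.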
The fragment buffer anti-aliasing technique utilizes a data structure that is kept outside of, or adjacent to, the rendered surface. The data structure contains information about specific pixels of an object that need to be anti-aliased (e.g., pixels along the edge of the object). The data structure for these pixels includes information regarding Z values, coverage masks, and multiple pixel color values of the pixels of the object and the adjacent object. When the object has been rendered, the specific pixels are further processed based on the information stored in the fragment buffers to build the final anti-aliased image.
As is known, the fragment buffer technique may be implemented in a variety of degrees of complexity, producing varying degrees of accuracy. In a fairly complex implementation, the accuracy of the fragment buffer technique is comparable to that of the over-sampling method, with the advantage that, on average, it requires much less memory to implement. The disadvantages, however, include that there is no upper bound on the size of the fragment buffers; thus, they must be sized to handle worst-case situations, which, on average, adds a significant amount of memory. In addition, a dedicated block is required to process the fragment buffers, which adds circuit complexity and increases the size and cost of associated hardware.
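A minimal sketch of the fragment buffer data structure follows. The stored fields (Z value, coverage mask, color) come from the description above; the front-to-back resolve order, the four-sample coverage mask, and all names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    # One partial-coverage contribution to an edge pixel (assumed fields).
    z: float        # depth of the contributing object at this pixel
    coverage: int   # bitmask of covered sub-pixel samples (e.g., 4 bits)
    color: tuple    # RGB color of the contributing object

@dataclass
class FragmentBuffer:
    # Side structure holding fragments only for edge pixels; interior
    # pixels need no entry, which is the average-case memory saving.
    # Note there is no upper bound on entries per pixel -- the worst-case
    # sizing problem noted above.
    fragments: dict = field(default_factory=dict)  # (x, y) -> [Fragment]

    def add(self, x, y, frag):
        self.fragments.setdefault((x, y), []).append(frag)

    def resolve(self, x, y, samples=4):
        # Combine stored fragments front-to-back by Z, weighting each
        # color by the sub-pixel samples it is the first to cover.
        frags = sorted(self.fragments.get((x, y), []), key=lambda f: f.z)
        covered = 0
        color = [0.0, 0.0, 0.0]
        for f in frags:
            visible = f.coverage & ~covered   # samples not yet claimed
            weight = bin(visible).count("1") / samples
            covered |= f.coverage
            for i in range(3):
                color[i] += weight * f.color[i]
        return tuple(color)
```

For example, a red fragment covering two of four samples in front of a fully covering blue fragment resolves to a half-red, half-blue pixel.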
The sort dependent anti-aliasing technique renders three-dimensional objects in a pre-determined order based on the Z value of the objects (i.e., the perceived distance from the front of the display). As such, the objects are rendered from the back most (farthest away) pixels to the front most (closest), or vice-versa. As the images are rendered, in this sort dependent order, the edges are smoothed by blending the pixels on the edges with the pixels of the other objects directly behind or in front of them. If this process is done correctly, it is more cost effective, in terms of hardware, than the other techniques and is capable of producing high quality anti-aliasing. This method, however, requires complex three-dimensional objects that intersect other objects to be subdivided into multiple objects that do not intersect with any other object. Such subdividing is done in software, which slows the overall rendering process and consumes more of the central processing unit's time.
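The back-to-front rendering and edge-blending steps above can be sketched as follows. The object representation (a Z value plus a map of edge pixels to color and coverage alpha) is hypothetical; only the depth-sorted blend order reflects the technique described:

```python
def blend(dst, src, alpha):
    # Blend a source color over a destination color using the source's
    # edge-coverage alpha (painter's-algorithm style compositing).
    return tuple(alpha * s + (1 - alpha) * d for s, d in zip(src, dst))

def render_sorted(objects, framebuffer):
    """Render objects back-to-front by Z; edge pixels are blended with
    whatever the frame buffer already holds behind them.
    `objects` is a list of dicts with assumed keys: 'z', and
    'pixels' mapping (x, y) -> (color, coverage_alpha)."""
    # Back most first: largest Z (farthest) is rendered before nearer
    # objects, so each blend sees the correct background.
    for obj in sorted(objects, key=lambda o: o['z'], reverse=True):
        for (x, y), (color, alpha) in obj['pixels'].items():
            framebuffer[(x, y)] = blend(
                framebuffer.get((x, y), (0, 0, 0)), color, alpha)
    return framebuffer
```

Note the correctness requirement stated above: intersecting objects have no single valid Z order per pixel, which is why they must first be subdivided in software.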
Therefore, a need exists for a cost-effective, high-quality anti-aliasing method and apparatus that, at least, overcomes the disadvantages of over-sampling, fragment buffers, and sort dependent anti-aliasing.