1. Field of the Invention
The present invention relates to computer systems and particularly to graphics processors. More particularly, the present invention relates to a graphics processor adapted to remove interpolation errors from parameters defining a polygon and render the polygon on a computer display.
2. Background of the Invention
Recent advances in graphics processing technology have allowed computer display devices to deliver higher resolution, greater rendering precision, and faster processing speed. Such advances have enabled computers to better perform the complex instructions demanded by graphics-intensive software applications offering movie playback, interactive video, multimedia, games, drawing or drafting capabilities, and other video-intensive tasks. One important feature of these applications is the capability to quickly and accurately render complex graphic objects on-screen, at the same time incorporating visual effects (also known as "pixel characteristics") such as shading, specular lighting, three-dimensional (3D) perception, texture-mapping, fog or haze effects, alpha blending, depth, and other effects. Such visual effects make the graphics seem more realistic and improve the overall quality of the images.
Shading consists of varying an image color along the span of the image, while the lighting effect is accomplished by multiplying the color intensities of an image by a constant value. Other techniques exist to create 3D effects such as depth and texture-mapping by translating two-dimensional (2D) patterns and shapes so that images appear to have a depth component, even though the images are rendered on a 2D screen. Fog and alpha blending change the appearance of an image in subtler ways. Fog creates the illusion of a mist, or haze, throughout the object and may be used in conjunction with other 3D effects to render images that appear to be at far distances. Alpha blending may be used to mesh together, or blend, screen images.
Computer systems typically incorporate raster display systems for viewing graphics, consisting of a rectangular grid of pixels aligned into columns and rows. Typical displays may incorporate screens with 640.times.480 pixels, 800.times.600 pixels, 1024.times.768 pixels, 1280.times.1024 pixels, or even more pixels. The display device is usually a cathode ray tube (CRT) capable of selectively lighting the pixels in a sweeping motion, moving across each consecutive pixel row (or "scan line"), from left to right, top to bottom. Accordingly, an entire screen of pixel values is known as a "video frame," and the display device usually contains a frame buffer consisting of Dynamic Random Access Memory (DRAM) or Video Random Access Memory (VRAM) which holds the pixel intensity values for one or more video frames. The frame buffer, updated regularly by the computer or graphics processor, is read by the display device periodically in order to excite the pixels. Frame buffers in color displays typically hold 24-bit values (3 bytes) for each pixel, each byte holding the pixel intensity value for one of the three primary colors, red, green, or blue. Accordingly, the three primary colors are combined to produce a wide spectrum of colors. Liquid crystal display (LCD) systems operate in a fashion similar to CRT devices.
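The storage such a frame buffer requires follows directly from the resolution and color depth. The short sketch below is illustrative only (the function name is hypothetical; real frame buffers may add row padding or double-buffering):

```python
# Rough frame-buffer sizing for a 24-bit (3 bytes per pixel) color display.
# Hypothetical helper for illustration; actual hardware may pad each row
# or hold multiple video frames.
def frame_buffer_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

print(frame_buffer_bytes(1024, 768))   # 2359296 bytes, i.e. 2.25 MB per frame
```

At 640.times.480 the same arithmetic gives 921,600 bytes, which is why higher resolutions historically demanded substantially more display memory.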
The pixel intensity values usually are computed and placed into the frame buffer by a graphics processor that is controlled by a software application known as a display driver. The display driver typically handles all of the graphics routines for the software applications running on the host computer by sending parameters to the graphics processor which describe the geometries of the graphics. One common technique for rendering screen images is to partition the images into simple constituent polygons such as triangles or quadrangles and to then render the constituent polygons on the display. Such a technique has two distinct advantages. First, since even very large polygons can be defined in terms of relatively few parameters, the software driver may send only the necessary polygon parameters, as opposed to transmitting a distinct intensity value for each pixel to the graphics processor. By sending a minimum of data per pixel, the software driver has more time in which to transmit increasingly detailed information to the processor about the polygon, including the parameters to describe the visual effects listed above. In one method of defining a polygon via parameters, the software driver uses the polygon vertex coordinates to calculate through interpolation (or "interpolate") the widths of the polygon along each scan line as well as the slopes of the edges between the vertices. A relatively small number of parameters which completely define the polygon may then be transmitted to the graphics processor to define the polygon for rendering.
Second, graphics processors have been developed which are highly successful at implementing elementary polygon-rendering routines. A typical polygon-rendering algorithm uses an initial polygon coordinate along with the polygon height and width and the slopes of the polygon edges to incrementally render the polygon. Beginning at the initial coordinate, the graphics processor enters into the frame buffer a horizontal line of pixels spanning the width of the polygon on the initial pixel row. Using the initial coordinate along with the polygon height and edge slopes, the graphics processor can compute the polygon coordinates along one vertical or slanted edge, called the "main slope," of the polygon. For each consecutive scan line, the graphics processor then uses the width values of the polygon to draw each horizontal row of polygon pixels into the frame buffer. Such an algorithm is known as the Incremental Line-Drawing algorithm, or Digital Differential Analyzer (DDA).
An incremental algorithm for rendering pixels at discrete positions on a pixel grid generally begins at a starting point and proceeds for some number of iterations, calculating the location of a single pixel during each iteration. The location of the current pixel in the scan line during any iteration is calculated by adding an increment, or delta, to the previous coordinate. The number of iterations needed for one scan line is the number of points in that scan line, or the distance to be spanned. Using such an algorithm, a graphics processor can draw polygons that are random triangles of any orientation or quadrangles with at least one flat top or bottom. Setting aside trivial triangles and collinear triangles, which are either points or lines, any random triangle or quadrangle can be partitioned into upper and lower triangles with a common horizontal side. The common horizontal side intersects the center, or opposite, vertex of the random triangle or quadrangle. The edge of the triangle or quadrangle opposite this center vertex, or the main slope, always spans the entire height of the triangle or quadrangle. The random quadrangle or triangle may be constructed by invoking an Incremental Line-Drawing algorithm twice: first to draw the upper polygon and again to draw the lower polygon.
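The incremental walk described above can be sketched in a few lines. This is an illustrative rendition only, assuming floor-rounding of the main-slope coordinate, left-to-right rendering, and a span of floor(width) + 1 pixels per scan line; the function and parameter names are hypothetical:

```python
import math

def dda_spans(x_m, y_m, dx_m, w_m, dw_m, height):
    """Incremental Line-Drawing (DDA) sketch: walk the main slope one scan
    line at a time and emit (y, leftmost_pixel_x, pixel_count) per span.
    Illustrative only; a real engine would draw the upper and lower
    constituent polygons with two such passes."""
    spans = []
    x, w = x_m, w_m
    for i in range(height):
        spans.append((y_m + i, math.floor(x), math.floor(w) + 1))
        x += dx_m   # step the main-slope x-coordinate by its delta
        w += dw_m   # step the span width by its delta
    return spans

print(dda_spans(5.0, 0, -0.5, 1.0, 1.0, 4))
# [(0, 5, 2), (1, 4, 3), (2, 4, 4), (3, 3, 5)]
```

Each iteration costs only two additions and a rounding step, which is why this family of algorithms maps so readily onto graphics hardware.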
Referring now to FIG. 1, triangles 100, 120, 140, and 160 represent the four general orientations of a random triangle. Triangle 100 may be partitioned into two constituent triangles 102 and 104 having common horizontal side 106 and opposite vertex 108. Main slope 110 spans the entire height of triangle 100, while first opposite slope 112 and second opposite slope 114 constitute the other two edges. Triangle 100 can be rendered using the Incremental Line-Drawing algorithm by drawing constituent triangles 102 and 104 separately, as will be explained in greater detail below. Triangles 120, 140, and 160 may be partitioned similarly into triangles 122 and 124 (constituting triangle 120), triangles 142 and 144 (constituting triangle 140), and triangles 162 and 164 (constituting triangle 160). Accordingly, these triangles have main slope 130 (triangle 120), main slope 150 (triangle 140), and main slope 170 (triangle 160) with opposite slopes 132 and 134 (triangle 120), opposite slopes 152 and 154 (triangle 140), and opposite slopes 172 and 174 (triangle 160). Triangles 120, 140 and 160 also have opposite vertex 128 (triangle 120), opposite vertex 148 (triangle 140), and opposite vertex 168 (triangle 160).
Examining the triangles from left to right, the main slopes 110 and 170 of triangles 100 and 160, respectively, have downward gradients, while the main slopes 130 and 150 of triangles 120 and 140, respectively, have upward gradients. The opposite vertices 108 and 148 lie to the left of their respective main slopes 110 and 150, while opposite vertices 128 and 168 lie to the right of their respective main slopes 130 and 170. Hence, triangles 100 and 140 are said to have negative opposite vertex directions, while triangles 120 and 160 are said to have positive opposite vertex directions. Thus, triangles 100, 120, 140, and 160 embody all four combinations of main slope gradients and opposite vertex directions, thereby constituting the four general types of random triangles. It follows that any one of the four triangles 100, 120, 140, and 160 can be uniquely identified by its main slope gradient and opposite vertex direction.
The parameters needed by a graphics processor to render a quadrangle with flat top and bottom edges or any randomly-oriented triangle typically comprise a set of fractional-valued parameters including a starting x-coordinate X.sub.MINT :X.sub.MFRAC, a delta X main .DELTA.X.sub.MINT :.DELTA.X.sub.MFRAC, a starting line width W.sub.MINT :W.sub.MFRAC, and a delta main width .DELTA.W.sub.MINT :.DELTA.W.sub.MFRAC. A software driver transmits the polygon parameters to the graphics processor, which renders the polygon as described below. Each fractional-valued parameter can be expressed as an integer plus a fraction, with the term "INT" denoting the integer portion and "FRAC" identifying the fractional portion. For example, if X.sub.MINT :X.sub.MFRAC =3.25, then X.sub.MINT =3 and X.sub.MFRAC =1/4. For clarity, the fractional-valued parameters X.sub.MINT :X.sub.MFRAC, .DELTA.X.sub.MINT :.DELTA.X.sub.MFRAC, W.sub.MINT :W.sub.MFRAC, and .DELTA.W.sub.MINT :.DELTA.W.sub.MFRAC may be abbreviated as X.sub.M, .DELTA.X.sub.M, W.sub.M, and .DELTA.W.sub.M, respectively, with all other fractional-valued parameters expressed herein using similar notation. A graphics processor also receives integer-valued parameters including an initial y-coordinate Y.sub.M, a polygon height, and the rendering direction X.sub.DIR, which defines whether the pixels are drawn from left to right or from right to left across each scan line. In the example of FIG. 1, a graphics engine draws pixels across a scan line from the main slope to the opposite slope, although the pixels may be rendered from opposite slope to main slope in some implementations. By convention, X.sub.DIR may be thought of as negative if the main slope lies to the right of the opposite slope or positive if the main slope lies to the left of the opposite slope, and the graphics processor assigns X.sub.DIR =0 if X.sub.DIR is positive and X.sub.DIR =1 if X.sub.DIR is negative.
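The INT:FRAC decomposition above can be made concrete with a short sketch. The helper below is hypothetical and uses exact rational arithmetic for clarity; real hardware would instead hold the two portions as fixed-point bit fields:

```python
from fractions import Fraction

def split_int_frac(value):
    """Split a nonnegative fractional-valued parameter into its INT and
    FRAC portions, mirroring the X_MINT : X_MFRAC notation of the text.
    Hypothetical helper for illustration only."""
    v = Fraction(value)
    whole = v.numerator // v.denominator   # integer (INT) portion
    return whole, v - whole                # (INT, FRAC)

print(split_int_frac(3.25))   # (3, Fraction(1, 4))
```

Thus 3.25 splits into X.sub.MINT =3 and X.sub.MFRAC =1/4, matching the example in the text.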
Notice that the X.sub.DIR parameter corresponds exactly to the "opposite vertex direction" defined with respect to the triangles of FIG. 1. Hence, triangles 100 and 140 have X.sub.DIR =1 (negative) while triangles 120 and 160 have X.sub.DIR =0 (positive).
A drawing algorithm similar to the DDA commonly is used by graphics systems to compute and apply visual effects to the pixels of the rendered polygons. Along with the parameters that describe the polygon coordinates, the display driver transmits to the graphics processor a set of parameters that describe the visual effects, or pixel "characteristics," throughout the polygon. The display driver typically calculates these parameters based on the values of the pixel characteristics at the vertices of the polygon. For instance, to display a polygon with red color, the display driver sends to the processor a starting red color value and a pair of gradient values, one gradient value defining the rate of change of red intensity along the main slope of the polygon and the other gradient value defining the rate of change of red intensity between adjacent pixels on a given scan line. In addition to computing the pixel coordinates using the Incremental Line-Drawing algorithm or the like, the graphics processor uses the starting and gradient parameters to assign a red intensity value to each pixel. The graphics processor typically computes the other pixel characteristics, including blue and green intensity and the other visual effects described previously, in the same manner as and concurrently with the polygon coordinate calculations. In fact, even though the pixel depth value is essentially a spatial characteristic like the x- and y-coordinates, the depth characteristic values are usually calculated in the same manner as the other visual effects, using a starting depth value and two gradient values to incrementally assign depth values to each pixel as the polygon is rendered.
A few problems arise when rendering polygons with visual effects onto a pixel grid, however. First, a pixel grid is inherently discrete, i.e., it is not possible to render images between the pixels of a pixel grid. Hence, although interpolation and other techniques may result in fractional-valued polygon parameters, screen images must be mapped to integer-valued pixel locations. One result of such a mapping is that the outlines of some shapes, notably those with slanted and curved edges, may appear jagged on-screen. Higher screen resolutions mitigate this jagged effect, since pixels which are closer together result in a smaller difference (or "error") between the fractional-valued coordinates of the image and the integer-valued pixel coordinates used to display the image. Another problem with pixel-mapping is that some smooth changes, or monotonic gradients, in visual effects such as color, lighting, texture, fog, and alpha blending may appear uneven, or banded, as a result of the mapping error. For instance, a polygon intended to change smoothly from light red at the top of the polygon to dark red at the bottom may actually appear to have horizontal bands of single shades of red. Banding artifacts occur frequently in polygon images with steeply sloping side edges and can distort and ruin the intended appearance of these images.
Visual depth effects may also suffer from mapping errors. Depth effects create the illusion of three dimensions, wherein graphics images displayed on a 2D screen may actually appear as 3D objects. A sense of depth perception can make graphic objects look more realistic. Mapping errors, however, can cause objects which are intended to intersect smoothly along a line in 3D to appear to have a jagged intersection. Texture-mapping as well as other 3D effects may also suffer from this problem.
Because these interpolation errors can severely degrade the quality of computer display images, a number of correction schemes have been proposed. As mentioned above, increasing the screen resolution helps to dilute the effects of jagged lines and curves in 2D shapes. Special drawing techniques have also been used to combat jagged lines, such as unweighted area sampling, scan conversion, and interpolated shading techniques such as Gouraud shading. These enhancements do not prevent pixel characteristics from suffering interpolation errors in some images, however. Coplanar polygons, for example, in which each edge lies in a single plane (in contrast with polygons whose edges are curved in 3D), can exhibit considerable banding and other nonlinear artifacts due to interpolation errors, even when rendered on high-resolution screens and when using special drawing techniques. Such errors are particularly noticeable in polygons with steeply sloping side edges and a large orthogonal (horizontal) gradient in one or more pixel characteristics.
For example, a steep slope in a line implies that the line changes slowly in the x-direction per unit change in the y-direction. Because polygons are typically drawn in consecutive scan lines, the rate of change of the line in the y-direction is always one pixel per scan line. Hence, the main slope parameter computed by the display driver more specifically defines the rate of change of the main slope in the x-coordinate. The slope parameter for a steep main slope may therefore have a small fractional component. Since the graphics processor typically rounds the pixel coordinates down before rendering, many consecutive pixels along one edge of a polygon may be rounded to the same x-coordinate. Because that edge is sloped, however, the difference, or "error," between the true x-coordinates and the rounded x-coordinates varies from scan line to scan line. The visual effects added to the pixels by the graphics processor thus become shifted in value by varying amounts, each value shifted by a degree proportional to the interpolation error, caused by rounding, of the corresponding pixel coordinate. This uneven shifting of visual characteristics on consecutive scan lines produces the unintended banding effects and jagged intersections mentioned above. Moreover, such problems occur in any visual effect applied to the pixels, including color, lighting, depth, texture-mapping, fog, alpha, and other visual effects.
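The per-scan-line shift just described can be sketched directly. Assuming floor-rounding as above (the function name is hypothetical), the rounding error on each scan line is the fractional remainder of the main-slope x-coordinate, and the resulting shift in a characteristic is that error multiplied by the orthogonal gradient:

```python
import math

def per_line_color_shift(x_m, dx_m, dr_ortho, height):
    """For each scan line, compute the fractional part lost when the
    main-slope x-coordinate is rounded down, times the orthogonal
    characteristic gradient. Illustrative sketch only."""
    shifts = []
    x = x_m
    for _ in range(height):
        error = x - math.floor(x)        # fraction lost to rounding down
        shifts.append(error * dr_ortho)  # characteristic shift on this line
        x += dx_m
    return shifts

print(per_line_color_shift(2.75, -0.25, 20, 4))   # [15.0, 10.0, 5.0, 0.0]
```

Because the shift differs from one scan line to the next, a characteristic that should vary smoothly down the polygon instead jumps between lines, which is the banding artifact described above.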
For example, FIG. 2 illustrates a shaded polygon 200 to be rendered onto a pixel grid. Because FIG. 2 illustrates polygon 200 as an ideal quadrangle superimposed onto coordinate system 203, the graphics controller must translate the parameters of polygon 200 to fit an integer-valued pixel grid. Parameters X.sub.M and Y.sub.M define the starting x and y pixel grid coordinates from which polygon 200 will be rendered. By convention, X.sub.M and Y.sub.M identify the x- and y-coordinates of the main slope upper vertex, although other implementations may define the lower main slope vertex as the initial point. Since polygon 200 has main slope 201, the coordinate pair (X.sub.M, Y.sub.M)=(2.75, 2) defines initial point 205. Accordingly, X.sub.MINT =2 and X.sub.MFRAC =0.75. The polygon height parameter defines the vertical height of the polygon, determining the number of scan lines needed to render the polygon. For polygon 200, the polygon height is 6 pixels, since the polygon spans rows (or "scan lines") 2 through 7 of the pixel grid 203.
W.sub.M represents the number of pixels along the initial scan line 2 and corresponds to the initial distance between the main slope 201 and opposite slope 202 of polygon 200. For polygon 200, W.sub.M =2.0, since the width between initial point 205 and endpoint 207 along the initial scan line is 2 units. Referring still to FIG. 2, X.sub.DIR =0 for polygon 200, since main slope 201 is situated to the left of opposite slope 202. Finally, the parameter .DELTA.X.sub.M defines the gradient of the main slope in terms of the change in x-coordinate per scan line, while .DELTA.W.sub.M defines the change in the horizontal width of the polygon along the main slope. Thus, .DELTA.X.sub.M and .DELTA.W.sub.M for polygon 200 are -0.25 and +0.25, respectively. Accordingly, .DELTA.X.sub.MINT =0, .DELTA.X.sub.MFRAC =-0.25, .DELTA.W.sub.MINT =0, and .DELTA.W.sub.MFRAC =+0.25.
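These deltas follow from the polygon's endpoint values. In the sketch below, the endpoint figures (a main slope running from x=2.75 on scan line 2 to x=1.5 on scan line 7, and a width growing from 2.0 to 3.25) are inferred from the stated deltas rather than read from the figure, and the helper name is hypothetical:

```python
def edge_deltas(x_top, x_bottom, w_top, w_bottom, height):
    """Derive the per-scan-line deltas (delta X main, delta main width)
    from endpoint values: a polygon of 'height' scan lines is walked in
    height - 1 incremental steps. Illustrative helper; endpoint values
    below are inferred, not taken from FIG. 2 itself."""
    steps = height - 1
    return (x_bottom - x_top) / steps, (w_bottom - w_top) / steps

print(edge_deltas(2.75, 1.5, 2.0, 3.25, 6))   # (-0.25, 0.25)
```

This reproduces the .DELTA.X.sub.M =-0.25 and .DELTA.W.sub.M =+0.25 values quoted for polygon 200.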
The pixel characteristics of a polygon may be sent to the graphics processor in a format similar to the polygon coordinate parameters as described above. For each type of visual effect, the graphics processor receives a starting characteristic parameter which defines the value of the pixel characteristic at the initial polygon pixel (i.e., at (X.sub.M, Y.sub.M)), a "delta main" parameter which defines the difference in the characteristic values of adjacent pixels along the main slope of the polygon, and a "delta ortho" parameter which defines the difference in the characteristic values of adjacent pixels along the same scan line. These three parameters allow the graphics processor to render polygons with a smooth, or monotonic, change in characteristic values along each scan line.
Still referring to FIG. 2, polygon 200 may be rendered with a gradient in one or more characteristic values. The graphics processor receives a set of parameters for each of the different pixel characteristics, including parameters for red color, green color, blue color, specular red, specular green, specular blue, depth, and the three texture-mapping coordinates u, v, and w. In the example of FIG. 2, the software driver transmits to the graphics processor parameters R.sub.M =60 (a starting red intensity parameter), .DELTA.R.sub.M =5 (delta red main), and .DELTA.R.sub.O =20 (delta red ortho), which define the desired shading effect along polygon 200. The parameter R.sub.M indicates the initial red color intensity at the starting coordinates (X.sub.M, Y.sub.M). The parameter .DELTA.R.sub.M defines the change in red color intensity between each pixel along the main slope, and .DELTA.R.sub.O defines the change in red color intensity per pixel in the orthogonal (horizontal) direction, or across each scan line. Given these parameters, a graphics processor can compute red color intensity values for each pixel when polygon 200 is rendered onto a display.
If polygon 200 were rendered on an infinitely precise pixel grid, applying the three red color parameters would result in a smooth, monotonic color change across the surface of polygon 200. The numbers in parentheses throughout FIG. 2 indicate the resulting red color intensities at various points on polygon 200. For instance, the red color value is 60 at the starting point 205. After applying the parameter R.sub.M =60 to the starting point 205, the graphics controller can vary the color monotonically along the main slope 201 according to .DELTA.R.sub.M. The points along the main slope 201 thus take red color values of 65 (point 210), 70 (point 215), 75 (point 220), 80 (point 225), and 85 (point 230), a monotonic increase of 5 red color intensity values per main slope pixel. In the orthogonal (horizontal) direction, each point in the interior of polygon 200 and along the opposite slope 202 takes on a value offset from that of the main slope point on the same scan line by .DELTA.R.sub.O per unit of distance. For instance, point 207 lies 2 integer units to the right of corresponding main slope point 205, which has red color intensity 60. Because .DELTA.R.sub.O =20 units per x-coordinate, point 207 has a red color intensity of 100, which is 2*.DELTA.R.sub.O =40 units higher than that of point 205. The color gradients along orthogonal scan lines 3 through 7 also exhibit a constant shift of 20 units of color intensity per integer change in the orthogonal direction, as indicated by the red color intensity values in parentheses corresponding to points 210 through 232.
Polygon 200 further exhibits a monotonic color gradient of 10 color units per integer change in the vertical direction, as indicated by the colors of point 207 (red=100), point 212 (red=110), point 217 (red=120), point 222 (red=130), point 227 (red=140), and point 232 (red=150) and by the colors of point 206 (red=85), point 211 (red=95), point 216 (red=105), point 221 (red=115), point 226 (red=125), and point 231 (red=135). The vertical gradient follows naturally from the main slope and ortho gradients.
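The ideal arithmetic behind these values can be condensed into a single expression. The function below is an illustrative reconstruction (its name is hypothetical, and its defaults are the FIG. 2 parameters for polygon 200):

```python
def ideal_red(x, y, r_m=60, dr_main=5, dr_ortho=20,
              x_m=2.75, y_m=2, dx_m=-0.25):
    """Red intensity at an ideal (fractional) point of polygon 200: step
    down the main slope by dr_main per scan line, then across the scan
    line by dr_ortho per unit of x. Illustrative sketch only."""
    main_x = x_m + (y - y_m) * dx_m              # fractional main-slope x
    return r_m + (y - y_m) * dr_main + (x - main_x) * dr_ortho

print(ideal_red(4.75, 2), ideal_red(4.75, 3))    # 100.0 110.0
```

Holding x fixed, each unit step in y adds dr_main plus dr_ortho times the main slope's horizontal retreat (5 + 0.25*20 = 10), which is exactly the 10-unit vertical gradient noted above.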
FIG. 3 illustrates the result of a graphics processor using the Incremental Line-Drawing algorithm to interpolate polygon 200 onto pixel grid 303 using the polygon parameters X.sub.M, .DELTA.X.sub.M, W.sub.M, .DELTA.W.sub.M, Y.sub.M, polygon height, and X.sub.DIR. The graphics processor used to draw polygon 300, however, does not include any type of error correction, and the red shading in polygon 300 appears banded rather than monotonic. Upon receiving the polygon parameters from the software driver, the graphics processor first computes the initial pixel values corresponding to point 205. First, the graphics processor determines the x-coordinate of initial pixel 305 on scan line 2 by rounding X.sub.M down to the nearest integer. Thus, pixel 305 is drawn at (x, y)=(2, 2). The red color intensity for initial pixel 305 is R.sub.M =60, by definition. Since X.sub.DIR =0, the graphics engine renders the remaining pixels in the positive direction (to the right) across the initial scan line. Because the initial scan line width W.sub.M =2.0, pixels 306 and 307 are rendered to complete the initial scan line. The graphics processor determines the red color value for each pixel by adding delta red ortho (.DELTA.R.sub.O =20) to each of the preceding pixels. Thus, pixel 306 has a red color value of 80, and pixel 307 has a red color value of 100.
After completing the initial scan line, the graphics processor advances to scan line 3 and computes the main slope x-coordinate by adding .DELTA.X.sub.M to the previous main slope x-coordinate. Thus the new x-coordinate is 2.75-0.25=2.50, and, rounding down the x-coordinate, the graphics engine draws a new main slope pixel 310 at (x, y)=(2, 3). The red color value for pixel 310 may be determined by adding .DELTA.R.sub.M =5 to the red value of the previous main slope pixel 305. Thus, pixel 310 has red color value 65. The red intensity values along scan line 3 are determined by adding .DELTA.R.sub.O =20 to the value of the preceding pixel. Hence, pixels 311 and 312 have red color intensities 85 and 105, respectively. The graphics processor continues to compute the pixel coordinates and red color values in this manner, rendering each consecutive row of pixels from scan line 4 through scan line 7. Accordingly, main slope pixels are assigned red color values of 70 (pixel 315), 75 (pixel 320), 80 (pixel 325), and 85 (pixel 330).
Because no error correction was used for polygon 300, however, the red color values appear banded. While the red color gradient in polygon 200 was smooth, the red color "jumps" between scan lines 5 and 6. For example, the red color values for pixels 305 (red=60), 310 (red=65), 315 (red=70), and 320 (red=75) progress gradually in steps of 5. The difference between pixels 320 and 326, however, is 25 units of red intensity. This same 25-unit shift in color gradient is also evident between pixels 321 and 327 and between pixels 322 and 328. Thus, instead of having a smooth color gradient throughout, polygon 300 appears to have two distinct red bands.
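The banding walk-through above can be reproduced with a short sketch. This is an illustrative rendition of the uncorrected rendering pass (the names are hypothetical, and the defaults are the FIG. 2 parameters), assuming floor-rounding and left-to-right spans:

```python
import math

def uncorrected_reds(x_m=2.75, dx_m=-0.25, w_m=2.0, dw_m=0.25,
                     r_m=60, dr_main=5, dr_ortho=20, height=6, y_m=2):
    """Render polygon 300 with no error correction: round the main-slope
    x-coordinate down, then add dr_ortho per pixel across each span.
    Returns {(x, y): red}. Illustrative reconstruction only."""
    reds = {}
    x, w, red = x_m, w_m, r_m
    for i in range(height):
        xi = math.floor(x)                        # rounded main-slope pixel
        for j in range(math.floor(w) + 1):
            reds[(xi + j, y_m + i)] = red + j * dr_ortho
        x, w, red = x + dx_m, w + dw_m, red + dr_main
    return reds

reds = uncorrected_reds()
print(reds[(2, 5)], reds[(2, 6)])   # 75 100 -- the 25-unit banding jump
```

The column of pixels at x=2 steps 60, 65, 70, 75 down scan lines 2 through 5 and then jumps to 100 on scan line 6, reproducing the two distinct red bands described above.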
The source of this banding effect lies in the difference, or error, between the fractional-valued x-coordinates of polygon 300 and the integer-valued pixels which the graphics processor actually renders. Notice that the difference between the x-coordinate of pixel 305 (x=2.75) and the actual, rendered location of pixel 305 (x=2) is 3/4 pixel but that the ideal starting color and actual starting color are both 60. Thus, the software driver calculated the initial red color value as if the first pixel 305 would be rendered at x=2.75. However, the graphics processor rounded each x-coordinate along scan line 2 down by 3/4 of a pixel. Comparing the ideal quadrangle of FIG. 2 to the rendered pixels of FIG. 3, it can be seen that the rendered pixels along scan line 2 of FIG. 3 have the wrong color values for the x-coordinates at which they were rendered. For instance, point 206 of FIG. 2 was intended to have red color value 85. The pixel in FIG. 3 corresponding to the coordinates of point 206 in FIG. 2, however, has red color 100, a difference of 15 color values. Likewise, the graphics processor rounded the x-coordinate of each pixel on scan line 3 by 1/2 pixel, effectively shifting each color value on scan line 3 by 10 color values. Also, the graphics processor rounded the x-coordinate of each pixel on scan line 4 by 1/4 pixel, effectively shifting each scan line 4 red color value by 5 color values. Because the x-coordinate for scan line 5 (x=2.0) had no fractional portion, the x-coordinate for pixel 320 needed no rounding. Therefore, the red color values were not shifted on scan line 5. Comparison of pixel 320 with the corresponding pixel 220 of polygon 200 verifies that both pixels have the same color value. It should be noted that although the example of FIG. 3 is directed toward the error induced when shading a polygon with the color red, a similar banding effect may occur with respect to any pixel characteristic that is applied to the polygon. Hence, the example of FIG. 3 is representative of interpolation error that may occur in green color, blue color, specular red, specular green, specular blue, u-texel, v-texel, w-texel, alpha, fog, depth, and other pixel characteristics.
One solution to such a problem has been to implement an error-correction algorithm that selectively alters the visual characteristics along each scan line. U.S. Pat. No. 5,625,768 assigned to Cirrus Logic, Inc. discloses a display driver that both generates polygon rendering parameters and calculates error adjustment terms for each pixel characteristic. The error adjustment terms are transmitted to a graphics processor along with the normal pixel parameters and stored into a register file. To compute the pixel characteristic values, the graphics processor first uses a set of interpolation circuits to compute an uncorrected version of each pixel characteristic. However, these uncorrected pixel characteristic values are subject to interpolation errors, as discussed previously. With the error adjustment terms stored in the register file, the graphics processor uses a second set of interpolation circuits to compute the accumulated error for each pixel, adjusting each pixel characteristic to correct the interpolation error. Hence, the graphics processor essentially renders the pixels and then uses the error terms to correct the error that occurred while rendering the pixels.
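The idea behind such error correction can be sketched as follows. This is a simplified model of the general approach, not the circuit disclosed in the '768 patent: every characteristic value on a scan line is shifted back by the fractional rounding error of the main-slope x-coordinate times the orthogonal gradient:

```python
import math

def corrected_reds(x_m=2.75, dx_m=-0.25, w_m=2.0, dw_m=0.25,
                   r_m=60, dr_main=5, dr_ortho=20, height=6, y_m=2):
    """Error-corrected render of polygon 200: subtract the accumulated
    rounding error (fractional x times the ortho gradient) from every
    color on the scan line. Illustrative sketch only, with names and
    defaults matching the FIG. 2 example."""
    reds = {}
    x, w, red = x_m, w_m, r_m
    for i in range(height):
        xi = math.floor(x)
        shift = (x - xi) * dr_ortho              # accumulated rounding error
        for j in range(math.floor(w) + 1):
            reds[(xi + j, y_m + i)] = red + j * dr_ortho - shift
        x, w, red = x + dx_m, w + dw_m, red + dr_main
    return reds

reds = corrected_reds()
print([reds[(2, y)] for y in range(2, 7)])   # [45.0, 55.0, 65.0, 75.0, 85.0]
```

With the shift applied, the column of pixels at x=2 regains the smooth 10-unit vertical gradient of the ideal polygon, eliminating the 25-unit band seen in the uncorrected rendering.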
The corrected values allow the display to avoid the visual defects noted earlier, such as banding and 3D intersection problems. However, such a configuration requires a very complex display driver, which places an extra burden on the host computer system. In addition, the software driver must transmit extra correction parameters to the graphics processor, requiring roughly twice the amount of communications bandwidth between the computer and the graphics processor as is needed to transmit polygon parameters only. Further, the graphics processor requires a large amount of register file space to store both the polygon parameters and the error correction parameters.
In addition, the graphics processor uses an error correction circuit coupled to each pixel characteristic interpolator, the error correction circuit adapted to correct each characteristic of a pixel after that characteristic has been calculated. Such a configuration not only results in an excessive amount of hardware but also increases the amount of calculation time, or latency, required to generate the pixel characteristic values, since the pixel characteristic value must essentially pass through two computations: one calculation to determine the uncorrected pixel value, and another calculation to correct the pixel value for interpolation error.
In light of the foregoing reasons, there remains a need for an effective yet efficient error correction system capable of adjusting pixel characteristic values in polygons rendered by a graphics processor. Such a system should be capable of rendering a polygon from a standard set of interpolation parameters without excessive hardware complexity or latency. Further, such an error correction system should integrate seamlessly with the CPU, using minimal computer memory while requiring as few CPU calculations as possible. To date, no such system is known that incorporates such features.