Computer graphics systems are commonly used to display graphical representations of objects on a two-dimensional computer display screen. Current computer graphics systems can provide highly detailed representations and are used in a variety of applications.
In typical computer graphics systems, an object to be represented on the computer display screen is broken down into a plurality of graphics primitives, a triangular example of which is shown in FIG. 1 and designated by reference numeral 11. Primitives 11 are basic components of a graphics picture and may include points, vectors (lines), and polygons, such as the triangular primitive 11 of FIG. 1. Each triangular primitive 11 is made up of spans 12 of picture elements 13 (pixels). Hardware and/or software is implemented to render, or draw, on the two-dimensional display screen, the graphics primitives 11 that represent the view of one or more objects being represented on the screen.
The primitives 11 that define the three-dimensional object to be rendered are typically provided by a central processing unit (CPU), which defines each primitive 11 in terms of primitive data. For example, when the primitive 11 is a triangular primitive 11, the CPU may define the primitive 11 in terms of the x', y', z' pixel coordinates (unnormalized orthogonal coordinate system) of its vertices, as well as the color values (R, G, B values) of each vertex. Rendering hardware interpolates the data from the CPU in order to produce the x, y, z screen coordinates (normalized orthogonal coordinate system) corresponding to the pixels 13 that are activated/deactivated to represent each primitive 11, as well as the color values (R, G, B values) for each of the screen coordinates x, y, z.
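The interpolation stage described above can be illustrated with a simplified software sketch. This is not the patent's rendering hardware; it merely demonstrates, under the assumption of standard barycentric interpolation, how per-vertex color values are interpolated across a triangular primitive. The function names are illustrative only:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of screen-space point p w.r.t. triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return w0, w1, 1.0 - w0 - w1

def interpolate_color(p, verts, colors):
    """Linearly interpolate an (R, G, B) value at pixel p from the values
    defined at the triangle's three vertices."""
    w0, w1, w2 = barycentric(p, *verts)
    # For each color channel, blend the three per-vertex values.
    return tuple(w0 * ca + w1 * cb + w2 * cc
                 for ca, cb, cc in zip(*colors))
```

At a vertex the interpolated color equals that vertex's color; at the centroid each vertex contributes equally.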
Early graphics systems failed to display images in a sufficiently realistic manner to represent complex three-dimensional objects. The images displayed by such systems exhibited extremely smooth surfaces, lacking the textures, bumps, scratches, shadows, and other surface details that make objects appear realistic. As a result, methods were developed to display images with improved surface detail. Texture mapping is one such method that involves mapping a source image, referred to as a texture, onto a surface of a three-dimensional object, and thereafter mapping the textured three-dimensional object to the two-dimensional graphics display screen to display the resulting image. Surface detail attributes that are commonly texture mapped include, for example, color, specular reflection, transparency, shadows, surface irregularities, etc.
Texture mapping involves applying one or more texture map elements, or texels, of a texture to each pixel 13 of the displayed portion of the object to which the texture is being mapped. Each texel in a texture map is defined by coordinates (generally two or more spatial coordinates, e.g., s, t, and, sometimes, a homogeneous texture effect parameter q) which identify its location in the texture map (two-dimensional or greater). For each pixel 13, the corresponding texel(s) that maps to the pixel 13 is accessed from the texture map via the texel coordinates (e.g., s, t, q of an orthogonal coordinate system) associated with that pixel 13 and is incorporated into the final R, G, B values generated for the pixel 13 to represent the textured object on the display screen. It should be understood that each pixel 13 in an object primitive may not map in a one-to-one correspondence with a single texel in the texture map for every view of the object.
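A minimal software sketch of the lookup just described follows. It performs a nearest-neighbor texel fetch; the treatment of the homogeneous parameter q (divided out before addressing the map) and the function name are assumptions for illustration, not the patent's implementation:

```python
def fetch_texel(texture, s, t, q=1.0):
    """Nearest-neighbor texel lookup.

    `texture` is a 2-D list of (R, G, B) tuples; s and t are texture
    coordinates in [0, 1]; q is the optional homogeneous texture effect
    parameter, divided out before the map is addressed.
    """
    h = len(texture)
    w = len(texture[0])
    u = min(int((s / q) * w), w - 1)   # column (s axis) index
    v = min(int((t / q) * h), h - 1)   # row (t axis) index
    return texture[v][u]
```

In practice a single pixel may map to several texels, in which case multiple fetches would be filtered together, which motivates the MIP map scheme discussed below.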
Texture mapping systems typically store data in memory representing a texture associated with the object being rendered. As indicated above, a pixel 13 may map to multiple texels 15. If it is necessary for the texture mapping system to read a large number of texels 15 that map to a pixel 13 from memory to generate an average value, then a large number of memory reads and the averaging of many texel values would be required, which would undesirably consume time and degrade system performance.
To overcome this problem, a well known scheme has been developed that involves the creation of a series of MIP (multum in parvo, or many things in a small place) maps for each texture, and storing the MIP maps of the texture associated with the object being rendered in memory. A set of MIP maps for a texture includes a base map that corresponds directly to the texture map, as well as a series of related filtered maps, wherein each successive map is reduced in size by a factor of two in each of the texture map dimensions (s, t). In a sense, the MIP maps represent different resolutions of the texture map.
An illustrative example of a set of MIP maps is shown in FIG. 2. In this simplified example, the MIP maps of FIG. 2 are two-dimensional (s, t) and include a base map 14a (the reference) that is eight-by-eight texels 15 in size, as well as a series of maps 14b, 14c, and 14d that are respectively four-by-four texels 15, two-by-two texels 15, and one texel 15 in size. The four-by-four map 14b is generated by box filtering (downsampling) the base map 14a. With box filtering, each texel 15 in the map 14b corresponds to an equally weighted average of four adjacent texels 15 in the base map 14a. Further, the two-by-two map 14c is similarly generated by box filtering map 14b. Finally, the single texel 15 in map 14d is generated by box filtering the four texels 15 in map 14c.
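The box-filtering chain described above can be sketched in software as follows. This is an illustrative sketch of the well known MIP map construction (single-channel texels for brevity), not the patent's hardware:

```python
def box_downsample(level):
    """One box-filter step: each output texel is the equally weighted
    average of a 2x2 block of input texels. `level` is a square 2-D list
    of floats with even side length."""
    n = len(level) // 2
    return [[(level[2*r][2*c] + level[2*r][2*c+1] +
              level[2*r+1][2*c] + level[2*r+1][2*c+1]) / 4.0
             for c in range(n)] for r in range(n)]

def build_mip_chain(base):
    """Return the base map plus successively box-filtered maps, halving
    each dimension until a single texel remains."""
    chain = [base]
    while len(chain[-1]) > 1:
        chain.append(box_downsample(chain[-1]))
    return chain
```

For an eight-by-eight base map, as in FIG. 2, this produces four maps of eight-by-eight, four-by-four, two-by-two, and one texel.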
The computer graphics system determines which MIP map 14 in a series of MIP maps 14a-14d to access in order to provide the appropriate texture data for a particular pixel 13 based upon the number of texels 15 to which the pixel 13 maps. For example, if the pixel 13 maps in a one-to-one correspondence with a single texel 15 in the texture map, then the base map 14a is accessed. However, if the pixel maps to four, sixteen, or sixty-four texels, then the maps 14b, 14c, and 14d are respectively accessed because those maps respectively store texel data representing an average of four, sixteen, and sixty-four texels 15 in the texture map.
In order to determine the number of texels 15 to which a pixel 13 maps so that the appropriate MIP map 14 can be accessed, gradients (mathematical derivatives) of the various texel coordinates with respect to the screen coordinates are computed. In this regard, gradient values ∂i/∂x and ∂i/∂y are calculated, where i is s, t, and/or q in the texel domain and where x, y are screen coordinates. These gradients reflect the rate of change of the texture coordinates relative to the pixel coordinates. Often, a single gradient is allocated to each pixel 13 by selecting the largest gradient.
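The selection of a MIP map from these gradients can be sketched as follows. The sketch assumes the common convention of taking the largest single gradient magnitude (in texels per pixel) as the pixel's footprint and choosing the level as its base-2 logarithm, so a one-to-one mapping selects the base map and a footprint of two texels per pixel (four texels total) selects the next map; the function name and clamping are illustrative assumptions:

```python
import math

def mip_level(ds_dx, ds_dy, dt_dx, dt_dy, num_levels):
    """Select a MIP level from texture-coordinate gradients.

    The footprint rho is the largest gradient magnitude, clamped to at
    least 1 (never sharper than the base map); the level is log2(rho),
    clamped to the number of available maps.
    """
    rho = max(abs(ds_dx), abs(ds_dy), abs(dt_dx), abs(dt_dy), 1.0)
    level = int(math.log2(rho))
    return min(level, num_levels - 1)
```

With four maps (as in FIG. 2), a pixel covering one texel selects level 0 (map 14a), two texels per axis selects level 1 (map 14b), and eight texels per axis selects level 3 (map 14d).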
Prior methods for determining the gradients rely on using some form of either a linear difference formula or a central difference formula. The former is more popular than the latter due, in part, to its simplicity and ease of implementation.
With the linear difference formula, each gradient derivative is essentially equal to an old gradient derivative plus a constant. Given the gradient at the vertices of a triangular primitive 11, the gradients along the edges as well as along the spans 12 of the primitive 11 are linearly approximated.
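The incremental evaluation described above ("an old gradient derivative plus a constant") can be sketched for a single span as follows. The linear step between the span's endpoint gradients is an assumption made for illustration; the patent's hardware is not being reproduced here:

```python
def span_gradients(g_start, g_end, num_pixels):
    """Linear-difference evaluation along one span: starting from the
    gradient at the span's left edge, each successive pixel's gradient is
    the previous value plus a constant per-pixel increment."""
    if num_pixels == 1:
        return [g_start]
    step = (g_end - g_start) / (num_pixels - 1)  # the constant increment
    out = [g_start]
    for _ in range(num_pixels - 1):
        out.append(out[-1] + step)  # new value = old value + constant
    return out
```

The same scheme is applied along the triangle's edges to obtain the per-span endpoint values, which is precisely why the result is only a linear approximation of the true gradients.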
When the central difference formula is employed, each gradient derivative is essentially equal to a weighted sum of nearby gradient derivatives. For more details regarding the use of the central difference formula, see A. Watt and M. Watt, Advanced Animation and Rendering Techniques, Addison-Wesley, pp. 300-301 (1995).
Although meritorious to an extent, these methods for calculating gradients are inaccurate, especially for rendered primitives 11 that exhibit strong spatial perspective. The larger the primitive 11, the greater the perspective effect and the greater the error. Furthermore, these methods are computationally complex relative to the degree of accuracy that they achieve.
Thus, an unaddressed need exists in the industry for a more efficient system and method for determining precise gradients in a texture mapping system of a computer graphics system.