As is known, video graphics circuitry is used in computer systems. Such video graphics circuitry functions as a co-processor to the central processing unit of the computer, wherein the video graphics circuitry processes graphical functions, such as drawings, paintings, video games, etc., to produce data for display.
In a simple two-dimensional display, the video graphics circuitry receives graphical data and commands from the central processing unit and executes the commands upon the graphical data to render the two-dimensional image. Such two-dimensional images may be generated while the central processing unit is performing two-dimensional applications, such as word processing, drawing packages, presentation applications, spreadsheet applications, etc.
Video graphics circuitry is also capable of rendering three-dimensional images. To render three-dimensional images, the central processing unit provides the video graphics circuitry with commands and graphics data. The commands indicate how the video graphics circuitry is to process the graphics data to render three-dimensional images on a display. The graphical data and/or commands may include color information, physical coordinates of object elements being rendered, texture coordinates of the object elements, and/or alpha blending parameters. The texture coordinates are utilized to map a texture onto a particular object element as it is rendered in the physical space of the display.
When a three-dimensional image is being rendered as a perspective view, i.e., having a portion of the image appear closer and another portion of the image appear further away, MIP mapping is utilized. MIP mapping provides a plurality of texture maps, wherein the first texture map is an uncompressed texture map. A second texture map of the plurality of texture maps is a four-to-one compressed texture map, where the length and width of the first texture map are each divided by two. The third texture map is a sixteen-to-one compressed texture map, where the length and width of the first texture map are each divided by four. The plurality of texture maps continues dividing the length and width of the first texture map by 2^N until the compressed texture map has been compressed to a single texel.
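The progression of texture-map dimensions described above can be sketched as follows. This is an illustrative sketch only; the function name and the assumption of a square power-of-two base texture are not taken from the source.

```python
def mip_chain_dimensions(width, height):
    """Return the (width, height) of each MIP level, halving each axis
    per level until the map collapses to a single texel.
    Illustrative helper; assumes power-of-two dimensions."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        # Each successive level divides each axis by two, so each level
        # holds one quarter the texels of the previous one.
        width = max(1, width // 2)
        height = max(1, height // 2)
        levels.append((width, height))
    return levels

# An 8x8 base texture yields maps of 64, 16, 4, and 1 texels.
print(mip_chain_dimensions(8, 8))
```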
When rendering perspective objects, the rendering circuitry of the video graphics circuit accesses one of the plurality of MIP maps to retrieve the appropriate texel for the current pixel being rendered. As the closer portions of the perspective object are being rendered, the rendering circuitry accesses the less compressed MIP maps (e.g., the uncompressed texture map, the four-to-one texture map, the sixteen-to-one texture map). When the further away portions of the perspective object are being rendered, the rendering circuitry accesses more compressed MIP maps (e.g., the sixteen-to-one texture map, the sixty-four-to-one texture map, etc.). Rendering perspective objects using various MIP maps works well as long as the perspective scaling in the X and Y directions is equal.
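The selection of a more or less compressed MIP map can be illustrated by a small sketch. The function name and the single "texels per pixel" input are hypothetical, not from the source; they stand in for whatever measure of on-screen compression the rendering circuitry computes.

```python
import math

def select_mip_level(texels_per_pixel):
    """Pick a MIP level from the per-axis compression of the surface.
    Nearby portions map roughly one texel to one pixel (level 0, the
    uncompressed map); distant portions map many texels to one pixel,
    so a more compressed level is chosen. Illustrative only."""
    return max(0, int(round(math.log2(max(texels_per_pixel, 1.0)))))

select_mip_level(1.0)  # nearest portion: level 0, the uncompressed map
select_mip_level(2.0)  # level 1, the four-to-one map
select_mip_level(4.0)  # further away: level 2, the sixteen-to-one map
```

Because each level halves both axes, a per-axis compression of 2^N selects level N, whose area compression is 4^N-to-one, matching the four-to-one and sixteen-to-one maps described above.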
If, however, the perspective compression ratio is different in the X direction than in the Y direction, the MIP map technique produces less than ideal visual results. Anisotropic filtering was developed to improve the MIP mapping technique when the X and Y compression ratios are different for perspective objects. When dealing with uneven compression ratios, the resulting footprint in the texture map for the pixel may be estimated by a parallelogram. The center of the parallelogram is the pixel center. The parallelogram may be approximated by a rectangle of the same area, using the parallelogram's height as one side and its area divided by its height as the other. The rectangle is then filtered across to determine the pixel value. Therefore, the short side of the rectangle is used to determine the MIP map to access, while the ratio between the long and short sides of the rectangle provides the number of samples to reconstruct.
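The rectangle approximation above reduces to a few arithmetic steps, sketched below. The function name is hypothetical, and the parallelogram is assumed to be given directly by its area and height; an actual circuit would derive these from the texture-coordinate derivatives.

```python
import math

def aniso_parameters(area, height):
    """Approximate the parallelogram footprint by a rectangle of equal
    area: one side is the parallelogram's height, the other is its
    area divided by that height. The short side selects the MIP map;
    the long-to-short ratio gives the number of samples to take along
    the long axis. Illustrative sketch only."""
    other_side = area / height
    long_side = max(height, other_side)
    short_side = min(height, other_side)
    # Short side (in texels per pixel) picks the MIP level.
    mip_level = math.log2(max(short_side, 1.0))
    # Anisotropy ratio picks the sample count along the long axis.
    num_samples = max(1, round(long_side / short_side))
    return mip_level, num_samples

# A footprint of area 32 and height 4 gives a 4-by-8 rectangle:
# short side 4 selects MIP level 2; ratio 8/4 gives 2 samples.
level, samples = aniso_parameters(32.0, 4.0)
```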
To filter across the rectangle, however, is quite computationally intensive when using a Jacobian process. For example, the number of samples to reconstruct must be determined, as must the location of those samples within the rectangle. In addition, the samples may be assigned different weighting factors, which also need to be determined. Each of these processes is, in itself, computationally intensive; combined, they make such a process cost prohibitive for commercial-grade video graphics circuits.
Therefore, a need exists for a method and apparatus that more efficiently performs anisotropic filtering.