The present invention relates to the field of computer graphics. Many computer graphic images are created by mathematically modeling the interaction of light with a three-dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene.
As the demand for computer graphics, and in particular for real-time computer graphics, has increased, computer systems with graphics processing subsystems adapted to accelerate the rendering process have become widespread. In these computer systems, the rendering process is divided between the computer's general-purpose central processing unit (CPU) and the graphics processing subsystem. Typically, the CPU performs high-level operations, such as determining the position, motion, and collision of objects in a given scene. From these high-level operations, the CPU generates a set of rendering commands and data defining the desired rendered image or images. For example, rendering commands and data can define scene geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene. The graphics processing subsystem creates one or more rendered images from the set of rendering commands and data.
Texture mapping is the process of applying color or transparency information in a two-dimensional image, referred to as a texture map or texture, to all or a portion of a surface in three-dimensional space. Visually, texture mapping “wraps” a flat image onto a three-dimensional surface or object.
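The basic lookup underlying texture mapping can be sketched as follows. This is a minimal illustration, not the invention's method: it assumes normalized (u, v) coordinates in [0, 1) and nearest-neighbor sampling, and the function and variable names are hypothetical.

```python
def sample_nearest(texture, u, v):
    """Return the texel nearest to normalized coordinates (u, v).

    `texture` is a list of rows, each a list of texel color values.
    """
    height = len(texture)
    width = len(texture[0])
    # Scale normalized coordinates to texel indices, clamping to the image.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# Example: a 2x2 checkerboard texture.
checker = [[0, 255],
           [255, 0]]
```

A renderer evaluates such a lookup for each pixel covering the textured surface, using the (u, v) coordinates interpolated across the surface.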
One problem with texture mapping arises from aliasing artifacts. Rendered images are typically composed of a number of individual pixels, each having a color value determined from a corresponding portion of the three-dimensional scene. For pixels corresponding to a surface with a texture map, each pixel may represent any number of texture map pixels (or texels), depending upon the orientation of the surface with respect to the viewer. For example, when a surface is close to the viewer, multiple pixels of a rendered image may correspond to a single texel. This is referred to as texture magnification. Conversely, when the surface is very distant from the viewer, a single pixel may correspond to multiple texels. This is referred to as texture minification. As a texture is minified, if the renderer does not sample and filter the texture map appropriately for a given surface orientation, visual aliasing artifacts will appear in the rendered image.
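The distinction between magnification and minification can be quantified by estimating how many texels one pixel covers from the screen-space derivatives of the texture coordinates. The sketch below is one common way to do this, with hypothetical names; a level of detail of 0 means one texel per pixel, negative values indicate magnification, and positive values indicate minification.

```python
import math

def level_of_detail(dudx, dvdx, dudy, dvdy, tex_width, tex_height):
    """Estimate the mipmap level of detail from screen-space derivatives.

    (dudx, dvdx) and (dudy, dvdy) are the rates of change of the texture
    coordinates per pixel step in screen x and y.
    """
    # Extent of the pixel footprint along each screen axis, in texels.
    len_x = math.hypot(dudx * tex_width, dvdx * tex_height)
    len_y = math.hypot(dudy * tex_width, dvdy * tex_height)
    # Isotropic filtering conservatively uses the larger extent.
    rho = max(len_x, len_y)
    return math.log2(rho)
```

For a 256×256 texture viewed so that one pixel step advances exactly one texel, this yields a level of detail of 0; stepping four texels per pixel (strong minification) yields 2.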
Prior texture filtering techniques, such as trilinear mipmapping, compute an array of prefiltered representations of the texture known as a mipmap pyramid. Each successive image in the array, or “mipmap level,” is a representation of the original texture image isotropically filtered to a constant degree of minification, such that each texel in the mipmap level is a weighted average of the corresponding texels of the base mipmap level for a pixel at the corresponding degree of minification, or “level of detail.” When rendering pixels with the mipmap, the renderer computes a level of detail that conservatively approximates the footprint of the pixel in texture space, then computes a filtered texture value by fetching, weighting, and accumulating texel values from the mipmap level or levels having the closest corresponding level of detail. This effectively computes a weighted average of the texels of the original texture image using a small number of fetch and filter operations from a precomputed mipmap, instead of fetching and filtering a potentially large number of texels from the original image. Isotropic filtering approximates the pixel footprint as a circle in texture space and thus filters texture maps equally in all directions. These techniques address the problem of sampling minified textures on textured surfaces roughly parallel to the image plane. However, they perform poorly for surfaces having a large range of depth values over the surface, such as surfaces oriented perpendicular to the image plane. Isotropic filtering gives textures on these surfaces a blurry, out-of-focus appearance.
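The construction of such a mipmap pyramid can be sketched as follows. This is a simplified illustration, assuming a square, power-of-two grayscale image and a plain 2×2 box filter; the function name is hypothetical and real implementations may use other filters.

```python
def build_mipmaps(base):
    """Build a mipmap pyramid from a square, power-of-two grayscale image.

    Each successive level halves the resolution; each texel is the average
    (box filter) of the corresponding 2x2 block in the level below, so each
    texel of level k is a weighted average of a 2^k x 2^k block of the base.
    """
    levels = [base]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                         prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                        for x in range(n)]
                       for y in range(n)])
    return levels
```

At render time, trilinear filtering bilinearly samples the two levels bracketing the computed level of detail and blends them by the fractional part of that level.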
Anisotropic texture filtering, or filtering textures unequally in different directions, prevents aliasing while avoiding the blurring introduced by the overly conservative approximation of the pixel footprint employed by isotropic texture filtering. Unfortunately, prior anisotropic filtering techniques substantially reduce rendering performance. Footprint assembly techniques form a more accurate approximation of an elliptical pixel footprint by taking multiple isotropic probes along the major axis of the ellipse. Anisotropic filtering via footprint assembly requires multiple isotropic probes at higher resolutions, whereas isotropic filtering requires only a single isotropic probe; thus anisotropic filtering via footprint assembly is considerably slower than isotropic filtering. The large number of isotropic probes combined to compute a single anisotropic filter value increases the number of texture memory accesses, thereby decreasing rendering performance. Ripmapping, another known technique, computes the complete set of rectangular mipmap pyramids with aspect ratios at powers of two (for example, a 16×16 texture map would require a ripmap consisting of mipmap pyramids of 16×8, 16×4, 16×2, 16×1, 8×16, 4×16, 2×16, and 1×16). Filtering a footprint with ripmaps involves computing the squash, or ratio of horizontal to vertical minification, of the pixel footprint in texture space and taking a single isotropic probe from the appropriate mipmap pyramid. The large number of mipmap pyramids for each texture requires a large amount of additional texture memory; for example, ripmapping increases the amount of texture memory required by a factor of four. Additionally, ripmapping is typically ineffective if the line of anisotropy of the pixel footprint is not aligned with one of the texture's axes.
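The footprint assembly approach described above can be sketched as follows. This is an illustrative simplification, not the technique claimed by the invention: it assumes an isotropic `probe(u, v)` lookup (for example, a trilinear sample) is supplied, and it spaces the probes evenly along the major axis of the elliptical footprint, averaging them with equal weights; real implementations may weight probes unequally.

```python
def footprint_assembly(probe, u, v, du, dv, num_probes):
    """Average isotropic probes along the line of anisotropy.

    `probe(u, v)` performs one isotropic texture lookup; (du, dv) is the
    major axis of the elliptical pixel footprint in texture space. Probes
    are spaced evenly along the axis, centered on (u, v).
    """
    total = 0.0
    for i in range(num_probes):
        # Parameter t ranges over (-0.5, 0.5) across the major axis.
        t = (i + 0.5) / num_probes - 0.5
        total += probe(u + t * du, v + t * dv)
    return total / num_probes
```

Each probe is an independent texture memory access, which illustrates the performance cost noted above: the cost grows linearly with the number of probes, i.e., with the degree of anisotropy.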
It is therefore desirable to provide a system and method that improve the performance of anisotropic texture filtering. It is also desirable to minimize the amount of extra texture memory required for anisotropic texture filtering and to optimize anisotropic texture filtering for specific applications. It is further desirable to improve the performance of anisotropic texture filtering regardless of the direction of anisotropy.