1. Field of the Invention
Embodiments of the present invention relate generally to computer graphics and more specifically to systems and methods for smooth transitions to bi-cubic magnification.
2. Description of the Related Art
Modern three-dimensional (3D) computer graphics systems typically render one or more 3D graphics images that are either displayed on a computer monitor or stored in memory for later use. The content of a 3D graphics image is generated from a set of 3D geometric objects that are stored and manipulated within a graphics application. The 3D geometric objects may include algorithmically generated shapes, arbitrary shapes, meshes, quads, triangles, lines, points and other related types of objects that may exist within a 3D environment. Three-dimensional geometric objects with non-zero surface area are typically modeled for display using triangles. In fact, 3D objects that are not natively composed of triangles, such as spheres or cylinders, are commonly manipulated in their native form and then tessellated into triangles for display.
While the geometric shapes of 3D objects within a particular graphics scene provide the structure within a 3D image, much of the visual richness and realism associated with modern graphics systems actually results from texture mapping operations performed on the 3D objects. Texture maps include, without limitation, surface textures and lighting maps. Surface textures represent the color pattern of the surface of an object, such as blades of grass on a lawn or bricks on a brick wall. Lighting maps represent two-dimensional intensity maps of light projected onto an object. Combining different texture mapping effects on 3D objects enables complex color patterns to be applied to an underlying geometric shape, thereby resulting in a greater degree of visual realism. For example, two triangles can be used to form the rectangular shape of a wall within a 3D image. Such a simple rectangular wall is given a much more convincing real-world appearance when a brick wall image is texture mapped onto the two triangles forming the wall. A lighting map may add further realism by illuminating the surface of the brick wall with a realistic lighting pattern, from a street lamp, for example.
One aspect of rendering a 3D image is the placement of a viewport, or camera view, within the 3D scene. The camera view is generally independent of the positions of the 3D objects within the scene, thereby allowing the camera to view an object at an arbitrary distance. When the camera is far away from an object, a large number of texture map texels usually map to one screen pixel. To avoid the large computational load associated with filtering that many texels down to one screen space pixel each time a 3D image is rendered, the texture map is commonly stored in a pre-filtered form known in the art as a MIP (“multum in parvo,” or “much in a small space”) map.
As is well-known, each MIP map includes pre-filtered versions of the original texture map image, starting with the original (highest) resolution texture map and progressing through a series of lower resolution texture maps. A MIP map commonly includes a set of map images ranging in size from the highest resolution map image (i.e., the one including the greatest number of texels) down to a map image that is 1×1 texel in size. Each map image within a MIP map is associated with a certain level of detail (LOD). The highest resolution map image available within a MIP map is commonly referred to as map level 0 or “LOD 0.” The next lower resolution map image is said to have map level 1, or LOD 1, and so on. The level of detail associated with a particular pixel sampled from a MIP map is commonly determined by the pixel-to-pixel sampling gradient in texel space. That is, an increment of one pixel in either the vertical or horizontal direction in screen space maps to some increment size within the texture map. Texture mapping generally includes selecting the LOD level at which a one-pixel stride in screen space approximately matches a one-texel stride within the selected map image. Taking the maximum of the vertical and horizontal sampling gradients is one way to select an LOD level; applying a function that blends the vertical and horizontal sampling gradients is another.
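The LOD selection described above can be sketched as follows. This is an illustrative sketch only: the function name, the max-of-gradients rule, and the `log2`-based mapping of gradient magnitude to LOD are assumptions, since actual hardware implementations vary.

```python
import math

def select_lod(dudx, dvdx, dudy, dvdy, num_levels):
    """Select a MIP LOD value from screen-space sampling gradients.

    (dudx, dvdx) is the texel-space step produced by a one-pixel
    horizontal move in screen space; (dudy, dvdy) by a one-pixel
    vertical move.  Uses the max-of-gradients rule mentioned above;
    a blend of the two gradients is an equally valid choice.
    """
    # Length of the texel-space step per screen pixel, per axis.
    step_x = math.hypot(dudx, dvdx)
    step_y = math.hypot(dudy, dvdy)
    rho = max(step_x, step_y)
    # log2(rho) == 0 means one texel per pixel, i.e. LOD 0.
    lod = math.log2(rho) if rho > 0 else 0.0
    # Clamp to the available range; an unclamped negative LOD
    # indicates magnification (below LOD 0).
    return max(0.0, min(lod, num_levels - 1))
```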
Bilinear filtering, also known as “filter-2,” is a well-known technique that involves sampling one LOD level of a MIP map using a sampling kernel of 2×2 texels. One weight is applied to each of the four samples, according to the fractional position of the sample point within the set of 2×2 texels. The sum of the four weights adds up to one (1.0) in order to maintain a proper average total intensity for the set of 2×2 texels. The weighted contributions are added together to determine the value of the bilinear sample. Each channel (red, green, blue, alpha) is typically computed independently. Tri-linear filtering is a well-known technique that involves performing a bilinear filter operation on a MIP map at a computed LOD level and again performing a bilinear filter operation on the same MIP map at the next higher LOD level (next lower resolution map image). The bilinear samples from the two LOD levels are then blended together, with blending weights determined by a fractional LOD value. Tri-linear filtering blends bilinear samples together to avoid a visually distinct boundary that may otherwise appear on a 3D object between two different LOD levels. While well-known techniques such as bilinear and tri-linear filtering, which use a sampling kernel of 2×2 texels, produce good results for minified samples (i.e., above LOD 0), these techniques produce fairly low-quality results when applied to highly magnified texture maps (i.e., below LOD 0). A texture map may become highly magnified if, for example, the camera is positioned very close to a texture mapped object, so that each texel maps to many screen space pixels. Filter-4 filtering uses a 4×4 texel sampling kernel, with weights selected according to the position of the sampling point within the 4×4 region. The weights need not be positive, but should add up to a total of one (1.0). The sixteen texel samples are weighted according to their respective computed weights.
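The bilinear weighting and fractional-LOD blending described above can be sketched as follows for a single channel. The function names and the list-of-lists texture representation are illustrative assumptions, not part of the source.

```python
def bilinear_sample(texels, u, v):
    """Bilinearly filter a single-channel 2D texel array.

    (u, v) is a continuous texel-space coordinate; the 2x2 kernel
    covers the four texels surrounding the sample point.
    """
    # Integer texel indices and fractional position within the 2x2 set.
    u0, v0 = int(u), int(v)
    fu, fv = u - u0, v - v0
    # The four weights sum to 1.0, preserving average intensity.
    w00 = (1 - fu) * (1 - fv)
    w10 = fu * (1 - fv)
    w01 = (1 - fu) * fv
    w11 = fu * fv
    return (w00 * texels[v0][u0] + w10 * texels[v0][u0 + 1]
            + w01 * texels[v0 + 1][u0] + w11 * texels[v0 + 1][u0 + 1])

def trilinear_sample(mip, u, v, lod):
    """Blend bilinear samples from two adjacent LOD levels using the
    fractional LOD value, as described above."""
    level = int(lod)
    frac = lod - level
    s0 = bilinear_sample(mip[level], u, v)
    # Texel coordinates halve at the next higher LOD level
    # (next lower resolution map image).
    s1 = bilinear_sample(mip[level + 1], u * 0.5, v * 0.5)
    return (1 - frac) * s0 + frac * s1
```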
A bi-cubic filter is a well-known filter-4 filter that typically produces very high quality results in both minified and highly magnified scenarios.
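As an illustration, one common bi-cubic kernel is the Catmull-Rom spline; the choice of this particular kernel is an assumption, as the text above does not name one. The sketch below shows how the four weights along one axis are derived, and how the outer weights can be negative while all four still sum to one (1.0); a full 4×4 filter-4 kernel is the outer product of the horizontal and vertical weight vectors.

```python
def catmull_rom_weights(t):
    """Weights for the four taps of a 1D Catmull-Rom cubic filter,
    where t in [0, 1) is the fractional sample position between the
    two center taps."""
    w0 = 0.5 * (-t**3 + 2*t**2 - t)       # may be negative
    w1 = 0.5 * (3*t**3 - 5*t**2 + 2)
    w2 = 0.5 * (-3*t**3 + 4*t**2 + t)
    w3 = 0.5 * (t**3 - t**2)              # may be negative
    return (w0, w1, w2, w3)
```

At t = 0 the weights collapse to (0, 1, 0, 0), so the filter reproduces texel values exactly at texel centers, which is one reason this family of kernels degrades gracefully in minified as well as magnified scenarios.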
One solution to improve the quality of texture mapping in the magnified case is to use a technique such as bi-cubic sampling that uses a larger (i.e., 4×4) sampling kernel. However, the consequences of using a larger sampling kernel include a substantial increase in memory bandwidth and potentially a substantial degradation in rendering performance. Most pixels within a typical graphics image are minified, and therefore do not noticeably benefit from such a larger sampling kernel. This solution imposes a performance penalty that applies to all the pixels in a rendered graphics image, even though only a minority of the pixels typically benefit.
As the foregoing illustrates, what is needed in the art is a texture map filtering technique that produces high-quality results for both minified and magnified texture maps, but without the potential performance penalties associated with using a larger sampling kernel.