The addition of texture patterns to computer generated graphic images is a significant enhancement that is useful in a wide variety of visual image generation applications.
In a computer generated image, picture elements (pixels) of the display on which the image is displayed have associated with them two-dimensional coordinates ("screen coordinates"). These screen coordinates uniquely identify each pixel in the display. Each screen coordinate then has associated with it values, such as red, green and blue values ("RGB values") which define the appearance of the pixel. Polygons or objects may then be generated by defining the color or intensity values for each pixel based on the screen coordinates of the pixel.
The addition of texture to a polygon in a computer generated image may use perspective transformation information together with texture mapping information to create values for each pixel reflecting a two-dimensional representation of a textured surface in three dimensional space. One method of texturing objects in a computer generated image is through the use of what is referred to in the art as a MIP-MAP, such as is described by Williams in an article entitled "Pyramidal Parametrics," Computer Graphics, Vol. 17, No. 3, pp. 1-11 (1983).
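A MIP-MAP of the kind Williams describes is a pyramid of prefiltered copies of a texture, each level half the resolution of the one below it. The following sketch, which is illustrative only and not taken from the source, builds such a pyramid by repeated 2x2 box-filter averaging of a grayscale map; the function name is a hypothetical helper.

```python
def build_mip_pyramid(texture):
    """Build a MIP-MAP pyramid from a square 2^n x 2^n texture
    (a list of lists of scalar intensity values).

    Level 0 is the full-resolution map; each successive level is
    produced by averaging 2x2 blocks of the previous level, down
    to a single 1x1 texel.
    """
    levels = [texture]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        size = len(prev) // 2
        next_level = [
            [
                (prev[2 * y][2 * x] + prev[2 * y][2 * x + 1]
                 + prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(size)
            ]
            for y in range(size)
        ]
        levels.append(next_level)
    return levels
```

For a 4x4 input this yields three levels (4x4, 2x2, 1x1); a full-color system would hold one such pyramid per color component.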
In the texturing process, pixels corresponding to each polygon in screen coordinates are provided with coordinate values within texture space (u,v) and with a level of detail ("LOD") value. The LOD represents the area of a pixel in the texture space and will ultimately be reflected in the selection of MIP-MAPs (texture maps) for texturing. The resultant set of (u, v, LOD) points corresponds to predefined color and intensity values or "texels" defined within a texture space.
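One common way to express the relationship between a pixel's area in texture space and its LOD value is to take the base-2 logarithm of the pixel's footprint in texel units, estimated from the screen-space partial derivatives of the texture coordinates. The sketch below is one conventional formulation, not the specific method of the source; the function name is a hypothetical helper.

```python
import math

def level_of_detail(du_dx, dv_dx, du_dy, dv_dy):
    """Approximate LOD from the screen-space partial derivatives of
    the texture coordinates (u, v).

    The footprint measure rho is the longer of the two edges of the
    pixel's (approximately parallelogram-shaped) image in texel space;
    LOD = log2(rho), clamped at 0 so magnified texels use level 0.
    """
    rho = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
    return math.log2(max(rho, 1.0))
```

A pixel covering a 2x2 block of texels thus yields an LOD of 1.0, pointing at the MIP-MAP level whose texels are twice as coarse as level 0.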
The term perspective transformation is used to denote the process of computing an object's instantaneous orientation in relation to a viewer of a graphical image. The perspective transformation of the objects of a scene defines an image composed of polygons which are defined in the x, y space of screen coordinates. Perspective transformation produces a matrix of polygon vertices specified with u, v and LOD values.
The result of perspective projection is to convert from three dimensional space to x-y two dimensional space with certain information also being determined which is dependent on the third dimension such as u, v and LOD. Typically, the LOD of a given texel, pixel or polygon is determined in the rendering steps of an application or system program module taking into account the distance and angle of view of the textured surface. The levels of detail of a texture map are conventionally precomputed for later access during rendering.
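The depth-dependent information mentioned above arises because perspective projection divides by the third dimension. A minimal sketch of this step is given below; it assumes a simple pinhole model with camera-space coordinates, and the function name and `focal` parameter are hypothetical. It returns u/z, v/z and 1/z rather than u and v directly, since those quotients vary linearly across a polygon in screen space and so can be interpolated before recovering u and v at each pixel.

```python
def project_vertex(x, y, z, u, v, focal=1.0):
    """Project a camera-space vertex (x, y, z) carrying texture
    coordinates (u, v) into two-dimensional screen space.

    Returns (sx, sy, u/z, v/z, 1/z). At any pixel, u can be
    recovered as (u/z) / (1/z), and likewise for v.
    """
    sx = focal * x / z
    sy = focal * y / z
    return sx, sy, u / z, v / z, 1.0 / z
```

The LOD for each pixel is then derived from how rapidly the recovered (u, v) values change across the screen, which depends on the distance and viewing angle of the surface.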
Displaying a textured pixel value typically requires that the different intensity values and other contributory qualities, such as illumination and shading, be calculated on the basis of a pre-defined texture map. The (u,v) and LOD values that define the position in texture space may be fractional, such that none of the three values corresponds exactly to a predefined texel coordinate in the map.
If the fractional part of the texture space mapped pixel address is simply truncated for the look-up of the texel value, then certain anomalies may occur in the computed image. The anomalies include unnatural variations in the appearance of the texture pattern in successive frames of an animated sequence. To avoid these anomalies, conventional methods have calculated the exact RGB or YIQ intensity values for each pixel on the display screen by accessing a set of individual, predefined texel intensity values that are stored in dedicated texture map memory. Typically, the four most proximal points are selected from each of the two proximal level of detail planes of each of three contributory texture maps (e.g. a red contribution map, a green contribution map, and a blue contribution map). Thus, a total of eight red, eight green and eight blue values are accessed for each computed pixel. The polygon pixel contribution values are generated by blending the eight sampled texture map points through interpolation. In the case of a system using RGB components, the interpolation is carried out in each of the three component color maps, and the results are used together as the resultant color intensities for display by an individual pixel on a graphics screen.
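The eight-sample blend described above is commonly known as trilinear filtering: a bilinear interpolation of the four nearest texels within each of the two nearest LOD planes, followed by a linear blend between the two planes using the fractional part of the LOD. The sketch below illustrates this conventional scheme for a single color component; it is a minimal illustration, not the specific hardware method at issue, and the function names are hypothetical.

```python
import math

def bilinear(level, u, v):
    """Bilinearly blend the four texels nearest fractional (u, v)
    within one MIP-MAP level, clamping at the borders."""
    h, w = len(level), len(level[0])
    u0, v0 = int(math.floor(u)), int(math.floor(v))
    fu, fv = u - u0, v - v0

    def tex(x, y):
        return level[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    top = tex(u0, v0) * (1 - fu) + tex(u0 + 1, v0) * fu
    bot = tex(u0, v0 + 1) * (1 - fu) + tex(u0 + 1, v0 + 1) * fu
    return top * (1 - fv) + bot * fv

def trilinear(pyramid, u, v, lod):
    """Blend four texels from each of the two nearest LOD planes
    (eight samples in total for this one component)."""
    d0 = min(int(math.floor(lod)), len(pyramid) - 1)
    d1 = min(d0 + 1, len(pyramid) - 1)
    f = lod - d0
    # Texture coordinates shrink by half at each coarser level.
    s0 = bilinear(pyramid[d0], u / 2 ** d0, v / 2 ** d0)
    s1 = bilinear(pyramid[d1], u / 2 ** d1, v / 2 ** d1)
    return s0 * (1 - f) + s1 * f
```

In an RGB system this computation is repeated for each of the three component maps, so twenty-four texel fetches contribute to a single displayed pixel, which motivates the memory-bandwidth concerns discussed next.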
Accessing the texture map texel values, which generally must be fetched repeatedly from memory located off the chip that performs the interpolation, can be time consuming in the context of high quality graphical image generation. Managing texture information with conventional techniques and conventional hardware arrangements is therefore well known to be expensive and burdensome for a process that requires rapid generation of superior computer graphic images.
Methods of providing video images are described in U.S. Pat. No. 4,905,164 to Chandler et al. An expressed object of Chandler et al. is to obtain color cell texture modulation while minimizing hardware requirements. Another attempt to reduce the hardware costs of texture processors is described in Sims et al., U.S. Pat. No. 4,586,038. This method involves the use of texture and shading gradients in three dimensional space to define texture modulations. Merz et al., U.S. Pat. No. 4,692,880, and Economy et al., U.S. Pat. No. 4,965,745, similarly describe image texturing.
As discussed above, in light of the increasing emphasis on higher quality computer generated images, particularly images with texturing, a need exists for efficient computation of textures for visual display.