Recent advances in computer performance have enabled graphic systems to provide more realistic graphical images using personal computers and home video game computers. In such graphic systems, some procedure must be implemented to “render” or draw graphic primitives to the screen of the system. A “graphic primitive” is a basic component of a graphic picture, such as a polygon, e.g., a triangle, or a vector. All graphic pictures are formed with combinations of these graphic primitives. Many procedures may be utilized to perform graphic primitive rendering.
Early graphic systems displayed images representing objects having extremely smooth surfaces. That is, textures, bumps, scratches, or other surface features were not modeled. In order to improve the quality of the image, texture mapping was developed to model the complexity of real world surface images. In general, texture mapping is the mapping of an image or a function onto a surface in three dimensions. Texture mapping is a relatively efficient technique for creating the appearance of a complex image without the tedium and the high computational cost of rendering the actual three dimensional detail that might be found on a surface of an object.
Prior Art FIG. 1A illustrates a primitive 10 which encompasses a plurality of pixels 12 that may be texture mapped. In use, a texture may be mapped to each pixel 12 by determining the texture coordinates 14 thereof, and looking up texture information (T) that may be mapped to the pixel 12 based on the texture coordinates 14.
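The per-pixel lookup described above can be sketched as follows. This is a minimal illustration only: the texture array, the coordinate names (u, v), and the use of nearest-neighbor sampling are assumptions for clarity, not features of the prior-art system, which typically filters between texels.

```python
# Hypothetical sketch of a per-pixel texture lookup: normalized texture
# coordinates (u, v) are mapped to a texel in a small texture image.

def texture_lookup(texture, u, v):
    """Map normalized texture coordinates (u, v) in [0, 1] to a texel
    using nearest-neighbor sampling (assumed for simplicity)."""
    height = len(texture)
    width = len(texture[0])
    # Scale the coordinates into the texel grid and clamp to valid indices.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny 2x2 texture; each texel is an (R, G, B) triple.
texture = [[(255, 0, 0), (0, 255, 0)],
           [(0, 0, 255), (255, 255, 255)]]

print(texture_lookup(texture, 0.1, 0.1))  # upper-left texel: (255, 0, 0)
```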
In order to increase the realism of the texture information, it is important that the texture information be properly lighted with diffuse and specular lighting. To accomplish this, various diffuse and specular light values associated with each vertex 16 of the primitive 10 may be used. Specifically, a particular diffuse light value (LD) and a specular light value (LS) may be calculated which are interpolations of the diffuse and specular light values associated with each vertex 16 of the primitive 10. Once the interpolated light values (LD) and (LS) are calculated, they may be multiplied by the texture information (T) looked up for the particular pixel 12. Note Equation #1. By this operation, the texture information is properly lighted.

LD*T+LS*T  Equation #1
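Equation #1 can be sketched per pixel as follows. The barycentric interpolation helper and all variable names are illustrative assumptions; only the final expression, LD*T + LS*T, comes from the text.

```python
# Hypothetical application of Equation #1 to a single pixel: the
# interpolated diffuse (LD) and specular (LS) light values each
# modulate the texture information (T) looked up for that pixel.

def interpolate(vertex_values, weights):
    """Interpolate per-vertex light values with barycentric weights."""
    return sum(v * w for v, w in zip(vertex_values, weights))

def light_texel(ld_vertices, ls_vertices, weights, t):
    ld = interpolate(ld_vertices, weights)  # interpolated diffuse light
    ls = interpolate(ls_vertices, weights)  # interpolated specular light
    return ld * t + ls * t                  # Equation #1

# Pixel at the centroid of the triangle (equal barycentric weights).
weights = (1 / 3, 1 / 3, 1 / 3)
result = light_texel((0.2, 0.4, 0.6), (0.1, 0.1, 0.1), weights, 0.5)
# LD interpolates to 0.4, LS to 0.1, so the result is 0.4*0.5 + 0.1*0.5.
```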
Since there are various other types of lighting and colors associated with textures other than diffuse and specular lighting, the foregoing equation is limited. In particular, by allocating only two sets of values (i.e. LS & LD), the resultant texture information may not be properly lighted with other types of lighting such as bump mapping, reflection mapping, etc.
Further, it should be noted that the light values associated with each vertex 16 of the primitive 10 must range between 0 and 1, in accordance with OpenGL® and other standard interfaces. However, light values traditionally do not range from 0 to 1, but rather from 0 to infinity. As such, the light values must be clamped within the acceptable range of 0 to 1. Unfortunately, this clamping may limit the manner in which the texture information may be properly lighted, since the multiplication of a light value that varies between 0 and 1 and a texture value modeling the surface attenuation that varies from 0 to 1 will always darken the texture. Also, as a light brightens and each of its components is clamped to 1, all color information is removed from the light, causing the light to become white rather than a brighter version of its own color and thereby discoloring the lit surface.
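The discoloration effect can be illustrated numerically. In this sketch the particular light color is a hypothetical example; the only operation taken from the text is the component-wise clamp to the range 0 to 1.

```python
# Hypothetical illustration of how clamping a bright light to [0, 1]
# discards its color: components above 1 all saturate at 1.

def clamp01(color):
    """Clamp each component of an RGB color to the range [0, 1]."""
    return tuple(min(max(c, 0.0), 1.0) for c in color)

# A very bright yellow light: red and green far exceed 1, blue does not.
bright_yellow = (4.0, 3.0, 0.2)

clamped = clamp01(bright_yellow)
# Red and green both saturate at 1.0, so the 4:3 ratio between them is
# lost; the light shifts toward white instead of staying a brighter
# yellow, and only the small blue component distinguishes it.
```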
FIG. 1A-1 illustrates the foregoing problem associated with the prior art. Two toruses 11 are shown to have a gray industrial surface texture, and are lit by a very bright yellow light. The left torus has the light's values clamped to the range 0-1 before modulating the texture, thus losing much of its brightness and almost all of its color. The right torus uses a light with more dynamic range and clamps to 0-1 after modulation, increasing the brightness and retaining the color of the light. This matches reality much more closely, where a light with huge dynamic range hits an object, is attenuated by the surface, and is then viewed by a sensor (be it camera or eye), which clamps the intensity of the received light to the sensor's range.
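The two torus renderings can be contrasted with a short numerical sketch. The specific light color and gray texel value below are illustrative assumptions; the two orderings of clamp and modulate are the ones the figure compares.

```python
# Hypothetical comparison of clamping before vs. after texture
# modulation, mirroring the left and right toruses of FIG. 1A-1.

def clamp01(color):
    """Clamp each component of an RGB color to the range [0, 1]."""
    return tuple(min(max(c, 0.0), 1.0) for c in color)

def modulate(light, texel):
    """Component-wise modulation of a light color by a texel color."""
    return tuple(l * t for l, t in zip(light, texel))

bright_yellow = (1.6, 1.2, 0.1)  # high-dynamic-range yellowish light
gray_texel = (0.5, 0.5, 0.5)     # gray industrial surface texture

# Left torus: clamp the light first, then modulate. The red and green
# components both saturate at 1 before the texture attenuates them, so
# the result is darker and whitened.
pre = modulate(clamp01(bright_yellow), gray_texel)

# Right torus: modulate first, then clamp. The surface attenuates the
# full-range light, so the red:green ratio of the light survives and
# the result is brighter.
post = clamp01(modulate(bright_yellow, gray_texel))
```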