1. Field
The present application generally relates to computer generated graphics, and, more particularly, to tinting a surface to simulate a visual effect in a computer generated scene.
2. Related Art
Techniques commonly known as computer generated imagery (CGI) can be used to simulate a broad range of digital environments including visual effects, characters or entire scenes in a digital cinematographic production. Typically, computer generated objects are created using a modeling technique that gives them the appearance of physical objects in a computer generated scene. The computer generated objects can be manipulated digitally to tell a story or represent some visual effect. Such modeling techniques are commonly used in areas such as graphic arts, computer games and cinematographic production.
In order to produce realistic images and effects, computer generated scenes are often rendered using one or more simulated light sources. Image rendering simulates the complex physical interactions between light and surfaces of objects in a scene using mathematical techniques sometimes referred to as shaders. When using a shader, optical phenomena such as diffuse reflection, specular reflection and surface texture are simulated using a bidirectional reflectance distribution function (BRDF). Using various factors, such as the geometry of a planar surface, the location and color of a light source, surface properties and the location of the receiving camera or eye, a BRDF can be used to simulate how a surface would appear in an actual physical environment. The degree of realism in a computer graphics image is largely dependent on the modeling accuracy and complexity of the shaders. Many computer rendering processes use multiple shaders to achieve a photorealistic result.
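As an illustrative sketch only (not part of the described invention, and with all names hypothetical), the kind of diffuse BRDF term a shader evaluates can be expressed as a per-channel product of surface reflectance, light color and a geometric cosine factor:

```python
# Hypothetical sketch of a minimal Lambertian (diffuse) BRDF term,
# showing how surface color, light color and geometry combine.

def lambertian_shade(surface_rgb, light_rgb, normal, light_dir):
    """Return the diffuse reflected color for a single light source.

    surface_rgb, light_rgb: (r, g, b) components in [0, 1]
    normal, light_dir: unit 3-vectors (light_dir points toward the light)
    """
    # Cosine falloff: a surface facing away from the light receives none.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Per-channel product of surface reflectance and incident light.
    return tuple(s * c * n_dot_l for s, c in zip(surface_rgb, light_rgb))

# A white surface lit head-on by a white light reflects at full intensity.
print(lambertian_shade((1.0, 1.0, 1.0), (1.0, 1.0, 1.0),
                       (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```

In a production shader this diffuse term would be combined with specular, texture and other terms, but the per-channel multiplication shown here is the behavior relevant to the lighting effects discussed below.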
In addition to producing a realistic image, a rendering process can be used to produce certain visual effects. As one example, low lighting conditions can be simulated in a computer generated scene through the use of lighting sources with a primarily blue color or hue. The human eye processes light differently in low lighting conditions: in what is sometimes referred to as a scotopic effect, the rod sensors in the eye dominate over the cone sensors. Thus, low light vision tends to be monochromatic, and a color shift may be perceived. However, certain problems arise when simulating this scotopic effect in a computer generated scene. Because a "night" light source is composed primarily of blue light, red objects appear faint or not visible at all. This is because the aforementioned BRDF mimics physical light properties by "absorbing" the incident blue light and reflecting little or no red light to the observer. This effect is most pronounced when the lighting color and the object color are two different highly saturated primary colors (such as blue and red).
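The absorption problem described above can be sketched numerically (a hypothetical illustration, not the application's method): with per-channel multiplicative reflection, a saturated red surface under a saturated blue light reflects essentially nothing.

```python
# Hypothetical sketch: per-channel multiplicative shading makes a red
# object disappear under a blue "night" light.

def reflect(surface_rgb, light_rgb):
    # Each channel of the reflected color is the product of the surface
    # reflectance and the incident light in that channel.
    return tuple(s * c for s, c in zip(surface_rgb, light_rgb))

red_surface = (1.0, 0.0, 0.0)   # reflects only red
blue_light = (0.0, 0.0, 1.0)    # emits only blue

# The red surface absorbs all of the blue light and reflects nothing,
# so it renders as black in the simulated night scene.
print(reflect(red_surface, blue_light))  # (0.0, 0.0, 0.0)
```

A less saturated pairing, such as a reddish-orange surface under a bluish light, would reflect a small but nonzero amount of light, which is why the effect is most pronounced for two different highly saturated primary colors.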
To produce certain visual effects, a rendering process can use multiple surfaces to represent a single object. Using this technique, one surface with the original object color can be used in one lighting condition, and another surface representing the same object with a different color can be used for another lighting condition. While this technique can achieve the desired visual effects, the use of multiple surfaces substantially increases the complexity of the model and creates a significant amount of work at various stages of a production process.