The present invention relates to the field of computer graphics, and in particular to methods and apparatus for optimizing the evaluation of functions associated with surfaces. Many computer graphic images are created by mathematically modeling the interaction of light with a three-dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene. Animated sequences can be created by rendering a sequence of images of a scene as the scene is gradually changed over time. A great deal of effort has been devoted to making realistic looking rendered images and animations.
Procedurally-generated data is often used to provide fine details in computer graphics images and animation. For example, hair, fur, grass, trees, rocks, and other elements of a scene may be generated procedurally. Procedures, which include shading programs (referred to as shaders), script programs, stand-alone programs, and dynamically loaded programs, can generate large amounts of data from a relatively small amount of input data. Thus, a user can provide a small set of input parameters to a procedure to add a large amount of detail to a scene, rather than adding this detail manually. Procedures can specify scene or object geometry, texture maps, lighting and shading properties, animation, or any other attribute of a scene. For example, a user can add millions of hairs to an animated character by specifying a few parameters of a hair generation procedure, rather than modeling and animating millions of hairs by hand. Thus, creating large data sets procedurally is often more efficient and cost effective than having a modeler, artist, or animator manually create the data.
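To illustrate the data amplification described above, the following is a minimal, hypothetical sketch of such a procedure (the function name, parameters, and per-hair attributes are illustrative, not taken from any particular system): a handful of input parameters expand into a large set of per-hair records.

```python
import random

def generate_hairs(num_hairs, length, curl, seed=0):
    """Hypothetical hair generation procedure: expands a few input
    parameters into per-hair geometry (a root position on a unit patch
    and a randomly jittered length), producing a large data set."""
    rng = random.Random(seed)  # seeded so results are repeatable
    hairs = []
    for _ in range(num_hairs):
        root = (rng.random(), rng.random(), 0.0)
        hair_len = length * (1.0 + 0.2 * (rng.random() - 0.5))
        hairs.append({"root": root, "length": hair_len, "curl": curl})
    return hairs

# Three user-specified parameters expand into a million-element data set.
hairs = generate_hairs(num_hairs=1_000_000, length=2.5, curl=0.3)
```

Note the use of seeded random values, which also foreshadows the difficulty discussed later: the extent of the generated data is not known until the procedure actually runs.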
Because procedures can add large amounts of detail to a scene, it is important to carefully control their application to prevent too much data from overwhelming the renderer. If there is too much detail in a scene, a renderer may take too much time and computational resources to generate images, or even worse, crash the rendering system.
Some renderers create images in small pieces, such as scanlines or “buckets” of adjacent pixels. For example, a scanline might include all of the pixels in a row of an image and a bucket might include a 16 by 16 group of pixels. Renderers can often process each scanline, bucket, or other piece of the image independently, allowing rendering to be highly parallelized. For each piece of an image, a renderer processes the scene data to determine which portions of the scene are potentially visible to that piece. The potentially visible portions of the scene for each image piece are then evaluated to determine the color, transparency, and/or other values for each pixel in the image piece.
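The bucket partitioning described above can be sketched as follows; this is an illustrative helper (not any renderer's actual API) that splits an image into the 16 by 16 pixel groups used as the example, each of which could then be processed independently.

```python
def buckets(width, height, size=16):
    """Partition a width x height image into size x size pixel buckets,
    clipping buckets at the right and bottom image edges. Each bucket is
    yielded as (x, y, bucket_width, bucket_height)."""
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (x, y,
                   min(x + size, width) - x,
                   min(y + size, height) - y)

# A 640 x 480 image yields 40 x 30 = 1200 buckets of up to 16 x 16 pixels.
all_buckets = list(buckets(640, 480))
```

Because the buckets do not overlap, a renderer can hand each one to a separate thread or process, which is what makes this scheme highly parallelizable.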
One way to control the application of procedurally generated data is to specify a bounding box or other bounding region or volume. A bounding box defines a conservative estimate of the portion of the scene potentially including the procedurally generated data. A bounding box can be defined as a three-dimensional space within the scene or as a two-dimensional region within a corresponding image. During rendering, the renderer will determine whether a bounding box is visible from the current camera viewpoint and/or to the current scanline, bucket, or other piece of the image. If so, the bounding box will be “cracked open” and the procedure associated with the bounding box will be executed. This will generate the desired procedural data. The newly generated procedural data is added to the set of scene data to be processed by the renderer. The renderer will then process the procedurally generated data in the same or similar manner as the rest of the data in the scene. Conversely, if the bounding box is not visible from the current camera viewpoint and/or to the current scanline, bucket, or other piece of the image, the procedure will not be executed and its corresponding procedural data will not be added to the scene. This reduces the amount of data unnecessarily processed by the renderer, which decreases the time and computational resources required for rendering.
One problem with bounding boxes is specifying the optimal bounding box size. If the bounding box is too small, then the renderer will “miss” the bounding box and procedurally generated data that should be in the scene will be missing. Conversely, if the bounding box is too big, the performance benefits of using a bounding box are eliminated. For example, if the bounding box is too large, buckets or other pieces of the image will often intersect with or “see” the bounding box even when the geometry and other scene data contained within the bounding box is not visible to the bucket. As a result, the renderer will often waste computational resources generating and processing procedural data, only to determine that none of this procedural data is actually visible to the bucket.
Moreover, if procedurally generated data is applied to animated objects, the optimal size of the bounding box may change substantially as the object is manipulated into different positions. Furthermore, many procedures use random data or noise values to make the data appear natural rather than artificial. Because of this, it is difficult or impossible to predetermine and cache an optimal bounding box size for procedures.
It is therefore desirable for a system and method to provide optimally sized bounding boxes to optimize the evaluation of procedurally generated data during rendering. It is further desirable for the system and method to efficiently determine the optimal size of a bounding box with minimal additional processing overhead. It is also desirable for the system and method to be easily integrated with a wide variety of procedures and types of procedurally-generated data.