1. Field
This invention relates generally to computer graphics, and more specifically to computer systems and processes for rendering data to produce images.
2. Related Art
In computer graphics, rendering generally refers to the process of generating a two-dimensional (2D) image from a model that describes three-dimensional (3D) objects within a scene. Many methods for generating images from three-dimensional scene information have been developed. Primary among these are ray-tracing and scanline rendering, and an image renderer may employ a number of rendering methods, including aspects of both, to obtain a final image. Both ray-tracing and scanline rendering develop an image made up of rectangular samples called pixels, but each algorithm constructs that image in a different way, and each has its own virtues and drawbacks.
As shown in FIG. 1A, the ray-tracing approach generates an image by tracing the path of rays from a virtual camera through pixels in an image plane and then into the scene. Each ray is tested for intersection with the objects in the scene. If a ray hits a surface, the ray-tracing algorithm traces the reflected and/or refracted rays, in addition to rays that sample the scene to capture secondary lighting effects, any of which may hit other surfaces in the scene. By recursively tracing all of these rays, a high degree of photorealism may be obtained. At each ray intersection, a shading operation may be performed to evaluate the color contribution due to that intersection.
While the ray-tracing algorithm projects rays from the virtual camera through the image plane into the scene against the direction of the light rays, a scanline algorithm projects objects from the scene onto the image plane in the same direction as the light rays as shown in FIG. 1B. The scanline algorithm then scan-converts the projected geometry/object and uses a Z-buffering scheme to keep track of which object is closest at each pixel in the image.
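The closest-object bookkeeping of the Z-buffering scheme can be sketched as follows. This is an illustrative, simplified sketch, not code from any particular renderer: each "object" here is a single depth sample already projected onto an image-plane pixel, whereas a real scanline renderer scan-converts whole polygons; the function name `zbuffer_resolve` is hypothetical.

```python
import math

def zbuffer_resolve(width, height, samples):
    """samples: iterable of (x, y, depth, color) projected samples.
    Keeps the nearest color per pixel, as a Z-buffering scheme would."""
    depth = [[math.inf] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, z, color in samples:
        if 0 <= x < width and 0 <= y < height and z < depth[y][x]:
            depth[y][x] = z          # Z-buffer write: record the new nearest depth
            image[y][x] = color      # this object is now the closest at the pixel
    return image
```

Whichever order the projected objects arrive in, only the sample closest to the camera survives at each pixel, which is what makes the scheme order-independent for opaque geometry.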
In general, ray-tracing techniques are preferred or even necessary in developing full global illumination solutions, which take into account not only primary visibility (light which comes directly from a visible object), but also secondary rays, such as reflections, refractions, and color bleeding (some even treat shadows as global illumination effects). Images rendered using global illumination algorithms often appear more photorealistic, but they are more expensive in terms of computation time and memory. As a practical matter, many renderers determine primary visibility using scan-conversion techniques and then switch to ray-tracing to evaluate secondary effects.
Since each of ray-tracing and scanline rendering has strengths and weaknesses in different situations, images in modern computer graphics films are often created using hybrid rendering systems that use a scanline algorithm for “primary” visibility determination (determining which objects are directly visible) and a ray-tracing algorithm for “secondary” lighting in which rays are created dynamically to capture the secondary effects, such as soft shadows, ambient occlusion, diffuse effects (such as color bleeding), as well as other more sophisticated secondary lighting effects.
Generally and naïvely, ray-tracing can be implemented with the following pseudo-code:
For each pixel in the image {
    Construct a ray from the virtual camera through the pixel
    For each triangle in the scene {
        Determine if there is a ray-triangle intersection
        If there is an intersection {
            If the intersection is the closest intersection so far (Z-Buffer test) {
                Set the intersection as the closest intersection (Z-Buffer write)
                Shade the intersection
                Place the shaded color at the pixel location in the image
            }
        }
    }
}
In the pseudo-code above, a ray is projected through each pixel in the image. Each ray is tested against every primitive (e.g., triangles or other shapes) in the scene for intersection, and the closest intersection for each ray is tracked. The disadvantage of this naïve approach is that it requires a huge number of intersection tests for each ray and can be inefficient and costly.
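The per-ray, per-triangle test inside the inner loop above can be sketched with the well-known Möller–Trumbore ray-triangle intersection algorithm. This is a minimal illustrative implementation, not the method of any particular renderer; vectors are plain 3-tuples and the helper names are hypothetical.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None      # reject hits behind the ray origin
```

Running this test for every ray against every triangle is exactly the cost the acceleration schemes discussed below are designed to avoid.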
Different acceleration schemes for ray-tracing have been developed to reduce the number of intersection tests for a given ray, including bounding volume acceleration schemes as well as spatial subdivision schemes. In general, a bounding volume scheme encloses an object that is relatively complex with a simple bounding volume, such as a sphere, ellipsoid, or rectangular solid, whose intersection test is less expensive than that of the object itself. If a given ray does not penetrate the bounding volume, then it does not intersect the object contained within the volume, and an expensive ray-object intersection test can be avoided. The spatial subdivision scheme partitions a scene into sub-spaces based upon object locations according to a dynamically determined spatial hierarchy (such as one supplied by organizing objects into a kd-Tree). The number of ray-object intersection tests may be substantially reduced by limiting the tests to the objects that occupy the sub-space the ray is passing through.
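The bounding-volume rejection test can be sketched as follows, using a sphere as the simple bounding volume. This is an illustrative sketch under the assumption that the ray direction is normalized; if the ray misses the sphere enclosing a complex object, the expensive per-triangle tests for that object can be skipped entirely. The function name is hypothetical.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """True if a ray with normalized direction intersects the bounding sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))   # half-b of the quadratic
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c                  # discriminant (leading coefficient is 1)
    if disc < 0.0:
        return False                  # the ray's line misses the sphere entirely
    root = math.sqrt(disc)
    # Accept the hit only if at least one intersection lies in front of the origin
    return (-b - root) >= 0.0 or (-b + root) >= 0.0
```

The sphere test costs a handful of multiplies and one square root, so it pays for itself whenever the enclosed object contains more than a few triangles.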
Another acceleration scheme includes ray-bundling, which is a technique that processes rays as bundles, e.g., bundling a plurality of rays and testing each ray of the bundle of rays against common scenery objects. For instance, in ray-bundling, in order to guarantee that no intersections are missed, each of the rays within the bundle is checked against the union of all of the triangles that any ray, on its own, may hit. If two rays originate from approximately the same location and they both travel along more or less the same path, the algorithm tends to work fairly well. However, the more the two rays diverge, the greater the possibility that the algorithm will perform many ray-triangle intersection tests unnecessarily. Therefore, ray-bundling typically works well when the rays within the bundle are sufficiently coherent or similar to each other.
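The bundling idea above can be sketched as follows. This is an illustrative sketch, not code from any particular system: every ray in the bundle is tested against the union of candidate triangles gathered for the whole bundle, so coherent rays amortize the candidate-gathering cost while divergent rays pay for tests they individually did not need. The `ray_triangle` parameter is assumed to be any ray-triangle intersection routine that returns a hit distance or None; the function name is hypothetical.

```python
def trace_bundle(rays, candidate_triangles, ray_triangle):
    """rays: list of (origin, direction) pairs.
    Returns the nearest hit distance for each ray, or None for a miss."""
    nearest = [None] * len(rays)
    for tri in candidate_triangles:        # union of triangles any ray may hit
        for i, (origin, direction) in enumerate(rays):
            t = ray_triangle(origin, direction, *tri)
            if t is not None and (nearest[i] is None or t < nearest[i]):
                nearest[i] = t             # track the closest hit per ray
    return nearest
```

Because every ray is tested against every candidate triangle, the per-ray cost grows with the size of the union; for a coherent bundle the union stays small, which is precisely why the technique favors rays that travel along similar paths.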