The present invention relates to the field of computer graphics, and in particular to methods and apparatus for optimizing the evaluation of functions associated with surfaces. Many computer graphic images are created by mathematically modeling the interaction of light with a three-dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene. Animated sequences can be created by rendering a sequence of images of a scene as the scene is gradually changed over time. A great deal of effort has been devoted to making realistic-looking rendered images and animations.
Rendering typically divides an image into image sample points, which correspond to pixels or sub-pixel regions of the image. The renderer samples the lighting and shading of objects or geometry of a scene at each image sample point to create an image. Renderers typically sample scenes by projecting rays from image sample points into the scene to intersect scene geometry, or by projecting scene geometry onto the image plane and determining intersections between image sample points and the projected scene geometry.
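The first sampling approach described above can be sketched as follows. This is a minimal illustrative sketch, not the claimed invention: the function name `sample_image` and the callback `trace_ray` are hypothetical, and `trace_ray(x, y)` is assumed to project a ray through image-plane position (x, y) and return either the color of the nearest hit or None.

```python
# Hypothetical sketch of point sampling an image: each image sample
# point is projected into the scene, and the colors of any hits are
# averaged to produce the pixel's value.

def sample_image(width, height, samples_per_pixel, trace_ray):
    """Return a row-major list of per-pixel colors for a width x height image.

    trace_ray(x, y) is an assumed callback that projects a ray through
    image-plane position (x, y) and returns a hit color or None.
    """
    background = (0.0, 0.0, 0.0)
    image = []
    for py in range(height):
        for px in range(width):
            accum = (0.0, 0.0, 0.0)
            for s in range(samples_per_pixel):
                # Place samples at evenly spaced offsets inside the pixel.
                # (A production renderer would typically jitter or
                # stratify the sample positions instead.)
                x = px + (s + 0.5) / samples_per_pixel
                y = py + 0.5
                hit = trace_ray(x, y)
                color = hit if hit is not None else background
                accum = tuple(a + c for a, c in zip(accum, color))
            image.append(tuple(a / samples_per_pixel for a in accum))
    return image
```

Because each sample either hits geometry or falls through to the background, the per-pixel result is determined entirely by where the discrete sample points land, which is what gives rise to the aliasing discussed below.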
Because the scene is sampled at discrete locations with image sample points, aliasing artifacts can arise when rendering small objects. Small objects are objects or scene geometry that are relatively small (e.g., less than one pixel in size) when projected onto the image plane, which can be because the scene geometry itself is relatively small and/or because the scene geometry is far away from the camera viewpoint. Small objects can be any type of scene geometry, such as polygons, micropolygons, particles, curves, patches, or any other computer graphics representation of geometry known in the art.
Temporal aliasing is one problem with rendering small objects. Typical renderers sample the scene geometry at one or more discrete image sample points within the boundaries of each pixel. If an image sample point “hits” scene geometry, the attributes of the scene geometry, such as its color, are used to determine the attribute values of the image sample point. Relatively large objects, which are larger than the spacing between image sample points, will always be hit by at least one image sample point as the object moves relative to the camera viewpoint. However, small objects may be smaller than the typical spacing between image sample points when projected onto the image plane. As a result, small objects tend to sporadically fall between image sample points as they move relative to the camera viewpoint, causing flickering or temporal aliasing.
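The flickering effect can be demonstrated with a simple one-dimensional sketch. The function name `frame_hits` and the specific object width and speed are hypothetical values chosen for illustration: an object 0.2 pixels wide drifts across an image sampled only at pixel centers (spacing of 1.0), so in some frames a sample lands inside the object and in others none does.

```python
# Hypothetical 1-D illustration of temporal aliasing: a sub-pixel-wide
# object moving across the image is hit in some frames and missed in
# others, which appears as flicker in the rendered animation.

def frame_hits(object_left, object_width, num_pixels):
    """Return True if any pixel-center sample falls inside the object."""
    return any(object_left <= px + 0.5 <= object_left + object_width
               for px in range(num_pixels))

# An object 0.2 pixels wide moving 0.3 pixels per frame: the hit/miss
# pattern alternates irregularly from frame to frame.
hits = [frame_hits(0.3 * frame, 0.2, 4) for frame in range(10)]
```

The resulting list mixes hits and misses as the object slides between sample positions, so the object pops in and out of existence over the animated sequence.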
One prior solution to this problem is to increase the number and density of image samples in a pixel. However, this greatly increases the computational resources and time needed to render an image. Additionally, regardless of the number and density of image sample points, temporal aliasing cannot be completely eliminated: an object smaller than the sample spacing can still fall between samples in some frames.
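Extending the earlier one-dimensional sketch shows why denser sampling only reduces, rather than eliminates, the problem. The function name `frame_hits_dense` and the parameter values are again hypothetical: with four samples per pixel the sample spacing drops to 0.25, but an object 0.2 pixels wide can still fall entirely between adjacent samples in some frames.

```python
# Hypothetical sketch: even with several samples per pixel, an object
# narrower than the sample spacing is still missed in some frames.

def frame_hits_dense(object_left, object_width, num_pixels, spp):
    """Return True if any of the spp evenly spaced samples per pixel
    falls inside the object."""
    samples = [px + (s + 0.5) / spp
               for px in range(num_pixels) for s in range(spp)]
    return any(object_left <= x <= object_left + object_width
               for x in samples)

# Same object (0.2 pixels wide, moving 0.3 pixels per frame), but with
# 4 samples per pixel instead of 1: misses are rarer, not gone.
hits = [frame_hits_dense(0.3 * frame, 0.2, 4, 4) for frame in range(10)]
```

Misses become less frequent as density increases, but any finite sample density leaves gaps that a sufficiently small object can slip through, at a steep cost in render time.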
It is therefore desirable for a system and method to eliminate temporal aliasing arising from the rendering of small objects. It is further desirable for the system and method to efficiently render large numbers of small objects. It is also desirable for the system and method to efficiently render large numbers of transparent objects.