Rendering large-scale data sets, such as point cloud data or volumetric data, is a computationally intensive task. Conventionally, the rendering task may be implemented by a sort-last parallel rendering system including a plurality of nodes, where the rendering task is shared among the nodes. The sort-last parallel rendering system is divided into three phases: (1) a partitioning phase; (2) a rendering phase; and (3) a compositing phase. In the partitioning phase, the entire volume of graphics data is subdivided into different sub-volumes. In the rendering phase, each node in the plurality of nodes is assigned one of the sub-volumes and renders a distinct sub-image. In the compositing phase, a compositing node combines the sub-images into a target image for display.
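The compositing phase above can be sketched as follows. This is a minimal illustration, not the system described here: it assumes a hypothetical data layout in which each rendering node emits a full-resolution sub-image of (color, depth) pairs, and the compositing node keeps, per pixel, the fragment nearest the viewer.

```python
def composite_sort_last(sub_images):
    """sub_images: list of 2D grids of (color, depth) tuples, all the
    same size; returns a 2D grid of composited colors."""
    height = len(sub_images[0])
    width = len(sub_images[0][0])
    target = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Depth test across nodes: the nearest fragment wins.
            color, _ = min((img[y][x] for img in sub_images),
                           key=lambda cd: cd[1])
            target[y][x] = color
    return target

# Two nodes render overlapping 1x2 sub-images; depths decide each pixel.
node_a = [[("red", 0.3), ("red", 0.9)]]
node_b = [[("blue", 0.7), ("blue", 0.2)]]
print(composite_sort_last([node_a, node_b]))  # [['red', 'blue']]
```

A real compositor would blend semi-transparent volume samples in depth order rather than taking a single winner, but the per-pixel merge across nodes is the same shape.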
The target image has a particular resolution, typically sized according to the resolution of the display device that will display the image. Sampling a scene at the discrete sampling frequency associated with that resolution may introduce aliasing artifacts when the scene contains signal frequencies above half the sampling frequency (the Nyquist limit). For example, Moiré patterns are one type of image artifact that can appear due to this type of undersampling. One technique for reducing these artifacts is to implement some type of anti-aliasing.
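The underlying sampling effect can be shown numerically. The sketch below (a generic signal-processing illustration, not specific to any renderer) point-samples a 7 Hz sine at only 8 Hz, below its 14 Hz Nyquist rate, and shows that the samples are indistinguishable from those of a 1 Hz sine with inverted phase: the high frequency aliases down to a low one.

```python
import math

def sample(freq_hz, rate_hz, n_samples):
    """Point-sample a unit-amplitude sine of freq_hz at rate_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(n_samples)]

# sin(2*pi*7*n/8) == sin(2*pi*n - 2*pi*n/8) == -sin(2*pi*n/8),
# so the 7 Hz signal masquerades as a (negated) 1 Hz signal.
high = sample(7, 8, 8)
alias = sample(-1, 8, 8)
assert all(abs(a - b) < 1e-9 for a, b in zip(high, alias))
```

In an image, the same mechanism turns fine spatial detail into spurious low-frequency structure such as Moiré banding.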
In a simple implementation to reduce aliasing artifacts, areas of the target image associated with large color gradients (i.e., edges) may be blurred to reduce the effect of the aliasing artifacts. This may be performed after the target image has been composited, but it is the least effective method for correcting aliasing because it simply blurs the artifacts in the target image. Another implementation to reduce aliasing artifacts is to perform multisample anti-aliasing (MSAA). In MSAA, during rendering, a color value for a pixel may be calculated based on multiple samples. However, such samples are typically filtered immediately during an intermediate step in the graphics processing pipeline to produce one color value for the pixel for a given primitive. These color values are stored in a frame buffer at a resolution that matches the resolution of the target image. In other words, taking multiple samples when computing a color value at an intermediate step of the graphics processing pipeline has the effect of sampling the scene at a higher frequency, thereby reducing the aliasing artifacts. However, each sample is not fully rendered independently of all the other samples for the pixel. For example, a single depth value may be calculated for the primitive even though multiple texture samples are filtered to generate the single color value. Yet another implementation to reduce aliasing artifacts is supersample anti-aliasing (SSAA). In SSAA, an image is fully rendered at a higher resolution and then the image is filtered to produce the target image for display at the lower resolution. In effect, each pixel in the target image is rendered as if multiple samples for the pixel are individual pixels of the higher resolution image.
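The filtering step of SSAA can be sketched as a simple box filter. This is one common choice of downsampling filter and is shown only for illustration: the example assumes a grayscale image supersampled at 2x the target resolution, and averages each 2x2 block of samples into one target pixel.

```python
def downsample_2x(image):
    """Box-filter a 2x-supersampled grayscale image (list of rows of
    floats) down to the target resolution by averaging 2x2 blocks."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block_sum = (image[y][x] + image[y][x + 1] +
                         image[y + 1][x] + image[y + 1][x + 1])
            row.append(block_sum / 4.0)
        out.append(row)
    return out

# A hard edge rendered at 2x resolution...
hi_res = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
]
# ...filters down to intermediate values along the edge.
print(downsample_2x(hi_res))  # [[0.25, 1.0], [0.0, 0.75]]
```

The intermediate values (0.25, 0.75) along the edge are what smooth the stair-stepping; the cost is that four fully rendered samples were produced for every displayed pixel.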
SSAA requires more processing capacity and memory/network bandwidth than MSAA, but SSAA typically produces the best results. However, some hardware architectures may not have the available processing capacity or bandwidth to fully implement SSAA without some compromise, such as by reducing frame rates. Thus, there is a need for addressing these issues and/or other issues associated with the prior art.