Monte Carlo (MC) ray tracing is a rendering algorithm for synthesizing photo-realistic images from 3D models. MC rendering algorithms can simulate a wide variety of rendering effects through a single unified framework, i.e., ray tracing. The rendering time of MC ray tracing, however, is often unacceptable, since it typically requires integrating a huge number of ray samples, e.g., more than 10K ray samples per pixel, to generate a converged rendered image.
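The per-pixel integration described above can be illustrated with a minimal sketch. Here `shade` is a hypothetical stand-in for evaluating the radiance carried by one randomly sampled ray; the pixel value is simply the average of many such samples, which converges at the usual MC rate of O(1/sqrt(N)).

```python
import random

def shade(u, v):
    """Hypothetical radiance returned by one ray sample parameterized by
    random numbers (u, v); a stand-in for a full ray-tracing evaluation."""
    return 0.5 + 0.5 * (u * v)

def render_pixel(num_samples, seed=0):
    """Monte Carlo estimate of a pixel value: the mean of many ray samples.
    The estimate's standard error shrinks proportionally to 1/sqrt(num_samples),
    which is why converged images need very large sample counts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        total += shade(rng.random(), rng.random())
    return total / num_samples
```

For this toy `shade`, the true pixel value is 0.625, and a large sample count brings the estimate close to it; halving the error requires quadrupling the number of samples.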
In attempting to tackle the aforementioned performance problem of MC ray tracing, various adaptive rendering techniques have been developed. These techniques typically control the sampling rate locally and adjust the image reconstruction adaptively. For example, some of these techniques exploit the heterogeneous noise characteristics of rendered images to guide a non-uniform sampling density rather than uniform sampling, and control smoothing locally in light of the MC noise so that high-frequency edges are properly preserved.
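One common way to realize the non-uniform sampling density mentioned above is to distribute a fixed ray-sample budget in proportion to each pixel's estimated variance. The following is an illustrative sketch of that heuristic, not any specific published method; the function name and parameters are chosen here for illustration.

```python
def adaptive_sample_counts(variances, total_budget, min_samples=1):
    """Allocate a total ray-sample budget across pixels in proportion to
    each pixel's estimated variance, so noisier pixels receive more samples.
    A toy version of variance-guided adaptive sampling."""
    total_var = sum(variances)
    if total_var == 0.0:
        # No noise estimate available: fall back to uniform sampling.
        base = total_budget // len(variances)
        return [base] * len(variances)
    return [max(min_samples, round(total_budget * v / total_var))
            for v in variances]
```

For example, with per-pixel variances [0.1, 0.4, 0.5] and a budget of 100 samples, the allocation is [10, 40, 50], concentrating effort where the MC noise is highest.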
Adaptive image rendering has a long history. For example, Kajiya, J. T., in the paper entitled "The Rendering Equation" [ACM SIGGRAPH '86, 143-150], presented the general idea that high-dimensional MC samples can be allocated adaptively in a hierarchical structure by using the variances of the samples, and that these samples can be integrated to generate rendered images. Approaches using those techniques typically show high-quality rendering results for moderate dimensions (e.g., 2 to 5), but they typically suffer from the curse of dimensionality, since the dimension of MC samples can be high (e.g., more than 10) when global illumination is simulated.
Frequency-analysis-based reconstruction for MC ray tracing has also been actively studied, since it can provide high-quality reconstruction results guided by the well-established principle of analyzing light transport in frequency space. Sophisticated anisotropic reconstruction methods based on this theory have been proposed for specific rendering effects such as depth-of-field, motion blur, soft shadows, ambient occlusion, distributed effects, and indirect illumination. Recently, the idea of simplifying the frequency-based anisotropic filters with a rectangle-shaped filter was presented to design an efficient axis-aligned filter for interactively reconstructing soft shadows, indirect lighting components caused by diffuse (and moderately glossy) bounces, and distribution effects. More recently, an optimization technique has been proposed to achieve an interactive frame rate while maintaining the shape of sheared filters. Although these methods demonstrate good rendering results even for noisy images rendered with a small number of samples, they often support only a subset of rendering effects.
Image-space adaptive methods were developed to address the dimensionality issue by analyzing MC errors (e.g., variance) in 2D image space only and denoising them with an established image filter, guided by the estimated errors. These methods have received attention due to their intrinsic generality and simplicity compared to the high-dimensional adaptive methods. At a high level, these methods, though they employ different image filters, control the filtering bandwidth at each pixel to minimize a numerical error. In particular, recent adaptive methods have proposed optimization algorithms that estimate an optimal per-pixel filtering bandwidth to minimize the mean squared error (MSE) of the reconstruction results. Technically speaking, these attempts can be considered optimization processes that compute an optimal balance between bias and variance, caused by over- and under-blurring, respectively.
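The per-pixel bandwidth selection described above can be sketched in miniature. The toy model below, which is illustrative and not any specific published estimator, filters a noisy 1D signal with box filters of several candidate radii and picks the radius minimizing an estimated MSE: a bias proxy (squared deviation of the filtered value from the noisy input, with the noise variance subtracted) plus the variance of the filtered estimate, noise_var / n. Over-blurring inflates the bias term; under-blurring inflates the variance term.

```python
def box_filter_at(signal, i, r):
    """Mean of the signal over a box window of radius r centered at pixel i,
    clamped to the signal bounds; also returns the window size."""
    lo, hi = max(0, i - r), min(len(signal), i + r + 1)
    window = signal[lo:hi]
    return sum(window) / len(window), len(window)

def select_bandwidth(signal, i, noise_var, radii=(0, 1, 2, 4, 8)):
    """Pick the filter radius minimizing a simple per-pixel MSE estimate,
    balancing bias (from over-blurring) against variance (from under-blurring).
    A toy illustration of MSE-driven bandwidth selection."""
    best_r, best_mse = radii[0], float("inf")
    for r in radii:
        mean, n = box_filter_at(signal, i, r)
        # Bias proxy: squared deviation from the noisy input, debiased by
        # the noise variance and clamped at zero.
        bias_sq = max(0.0, (mean - signal[i]) ** 2 - noise_var)
        # Variance of a box-filtered estimate of n independent noisy samples.
        mse = bias_sq + noise_var / n
        if mse < best_mse:
            best_r, best_mse = r, mse
    return best_r
```

On a flat region the estimator prefers the widest filter (bias is negligible, so averaging more pixels only reduces variance), while at a sharp edge it prefers the narrowest one, which is exactly the edge-preserving behavior the adaptive methods aim for.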
The conventional techniques described above use various reconstruction frameworks. They typically focus on controlling filtering bandwidths locally to increase numerical accuracy, while the approximation function itself remains fixed, usually as a low-order function.