The term “ray tracing” describes a technique for synthesizing photorealistic images by identifying all light paths that connect light sources with cameras and summing up these contributions. The simulation traces rays along the line of sight to determine visibility, and traces rays from the light sources in order to determine illumination.
Ray tracing has become mainstream in motion pictures and other applications. However, current ray tracing techniques suffer from a number of known limitations and weaknesses, including numerical problems, limited capabilities to process dynamic scenes, slow setup of acceleration data structures, and large memory footprints. Thus, current ray tracing techniques lack the capability to deal efficiently with fully animated scenes, such as wind blowing through a forest or a person's hair. Overcoming the limitations of current ray tracing systems would also enable the rendering of, for example, higher quality motion blur in movie productions.
Current attempts to improve the performance of ray tracing systems have fallen short for a number of reasons. For example, current real-time ray tracing systems generally use 3D-trees as their acceleration structure, which are based on axis-aligned binary space partitions. Because the main focus of these systems is on rendering static scenes, they typically fail to address the significant setup time required to construct these data structures for fully animated scenes. Along these lines, one manufacturer has improved real-time ray tracing by building efficient 3D-trees and developing an algorithm that shortens the time needed to traverse the tree. However, it can be shown that the expected memory requirement of the system increases quadratically with the number of objects to be ray traced.
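The memory growth noted above stems from a property of axis-aligned space partitions: any object straddling a splitting plane must be referenced in both resulting subtrees. The following sketch of a "3D-tree" builder (spatial-median splits, hypothetical names, not any particular vendor's heuristic) makes the duplication visible:

```python
def build_3dtree(boxes, lo, hi, axis=0, depth=0, max_depth=8, leaf_size=2):
    """Axis-aligned binary space partition over a list of (lo, hi)
    bounding boxes.  Boxes straddling the splitting plane are
    referenced in BOTH children -- the source of the memory growth
    discussed above.  Illustrative sketch only."""
    if depth >= max_depth or len(boxes) <= leaf_size:
        return ("leaf", boxes)
    split = 0.5 * (lo[axis] + hi[axis])          # spatial median split
    left = [b for b in boxes if b[0][axis] < split]
    right = [b for b in boxes if b[1][axis] > split]
    l_hi = hi[:axis] + (split,) + hi[axis + 1:]  # shrink the child volumes
    r_lo = lo[:axis] + (split,) + lo[axis + 1:]
    a = (axis + 1) % 3                           # cycle the split axis
    return ("node", axis, split,
            build_3dtree(left, lo, l_hi, a, depth + 1, max_depth, leaf_size),
            build_3dtree(right, r_lo, hi, a, depth + 1, max_depth, leaf_size))

def reference_count(node):
    """Total object references stored in the leaves; this can exceed the
    number of objects because of duplication across splitting planes."""
    if node[0] == "leaf":
        return len(node[1])
    return reference_count(node[3]) + reference_count(node[4])
```

With three boxes of which one straddles the first splitting plane, a single split already stores four references for three objects; deeper trees amplify the effect.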
Another manufacturer has designed a ray tracing integrated circuit that uses bounding volume hierarchies to improve system performance. However, it has been found that the architecture's performance breaks down if too many incoherent secondary rays are traced.
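The basic culling primitive inside such a bounding volume hierarchy is a ray/axis-aligned-box overlap test, which every ray must run along its own path through the hierarchy. A standard "slab" test can be sketched as follows (function names are ours; the sketch relies on IEEE floating-point infinities for axis-parallel rays and ignores the edge case of a ray origin lying exactly on a slab boundary):

```python
def ray_hits_aabb(origin, inv_dir, lo, hi):
    """Slab test: does the ray origin + t * dir, with inv_dir[a] = 1 / dir[a],
    hit the axis-aligned box [lo, hi] for some t >= 0?"""
    t_near, t_far = 0.0, float("inf")
    for a in range(3):
        t1 = (lo[a] - origin[a]) * inv_dir[a]
        t2 = (hi[a] - origin[a]) * inv_dir[a]
        if t1 > t2:
            t1, t2 = t2, t1        # order the two slab distances
        t_near = max(t_near, t1)   # overall entry is the latest slab entry
        t_far = min(t_far, t2)     # overall exit is the earliest slab exit
    return t_near <= t_far
```

Precomputing `inv_dir` once per ray replaces divisions by multiplications, a common design choice since the same ray is tested against many boxes.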
In addition, attempts have been made to improve system performance by implementing 3D-tree traversal algorithms using field-programmable gate arrays (FPGAs). The main increase in processing speed in these systems is obtained by tracing bundles of coherent rays and exploiting the capability of FPGAs to perform rapid hardwired computations. The construction of acceleration structures has not yet been implemented in hardware. The FPGA implementations typically use floating point techniques at reduced precision.
Photorealistic image synthesis involves identifying all light paths that connect simulated lights with simulated cameras, i.e., connecting light sources and pixels by light transport paths, and summing up these contributions. To this end, the simulation traces rays along the line of sight to determine visibility, and traces rays from the light sources in order to determine illumination. Vertices along these transport paths are found by tracing straight rays from one point of interaction to the next. Beyond this, many other direct simulation methods in scientific computing rely on tracing particles along straight lines. Usually, a considerable part of the total computation time is spent on ray tracing.
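The visibility queries described above reduce to ray-primitive intersection tests. As an illustrative sketch (the function names and the choice of spheres as primitives are ours, not taken from any particular system), a shadow ray between two path vertices can be tested as follows:

```python
import math

def intersect_ray_sphere(origin, direction, center, radius):
    """Return the smallest non-negative ray parameter t at which the ray
    origin + t * direction hits the sphere, or None if it misses.
    Assumes `direction` is normalized."""
    # Solve |origin + t * direction - center|^2 = radius^2 for t.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * v for d, v in zip(direction, oc))  # half the linear coefficient
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                                # ray misses the sphere
    t = -b - math.sqrt(disc)                       # nearer intersection
    if t < 0.0:
        t = -b + math.sqrt(disc)                   # origin inside the sphere
    return t if t >= 0.0 else None

def visible(p, q, spheres):
    """Shadow-ray test: is the straight segment from vertex p to vertex q
    unobstructed by any sphere in the scene?"""
    d = tuple(b - a for a, b in zip(p, q))
    dist = math.sqrt(sum(v * v for v in d))
    d = tuple(v / dist for v in d)
    eps = 1e-6                                     # avoid self-intersection
    for center, radius in spheres:
        t = intersect_ray_sphere(p, d, center, radius)
        if t is not None and eps < t < dist - eps:
            return False
    return True
```

A production renderer performs this kind of test for vast numbers of rays, which is why the acceleration structures discussed below dominate overall performance.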
The time spent tracing many rays can be dramatically reduced by constructing an auxiliary acceleration data structure that allows large portions of the scene to be efficiently excluded from intersection testing, rather than intersecting each ray with every object in the scene.
The efficiency of ray tracing techniques depends heavily on how the search structures are built. Aside from various existing heuristics, memory management is almost always an issue. While hierarchically partitioning the list of objects allows the memory footprint to be predicted, such techniques based on bounding volumes can suffer from inefficiencies caused by large objects and by the convexity of the applied bounding volumes.
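The predictability claim can be made concrete: a binary hierarchy built by partitioning the object list (rather than space) stores each object in exactly one leaf, so for n objects it always has exactly 2n - 1 nodes, regardless of the geometry. A sketch with hypothetical names:

```python
def build_list_hierarchy(objects):
    """Binary hierarchy formed by splitting the object LIST in half;
    every object lands in exactly one leaf, so no references are
    duplicated and the node count is fixed by n alone."""
    if len(objects) == 1:
        return ("leaf", objects[0])
    mid = len(objects) // 2
    return ("node",
            build_list_hierarchy(objects[:mid]),
            build_list_hierarchy(objects[mid:]))

def node_count(tree):
    if tree[0] == "leaf":
        return 1
    return 1 + node_count(tree[1]) + node_count(tree[2])
```

This bounded footprint is what the bounding-volume approach buys; the flip side, per the text above, is that large or poorly fitting convex bounding volumes cull rays less effectively, so predictable memory is traded against culling efficiency.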
Accordingly, it would be desirable to provide methods, systems, devices and computer program products that enable the efficient prediction of such multiplicity and of the memory footprint, and the reduction of the memory footprint, all while enabling rapid processing and maximizing frame rates.