A large model is loosely defined as one that does not fit in main memory. Interactive rendering of large models has applications in many areas (e.g., computer-aided design (CAD), engineering, entertainment, and training), and has therefore been the focus of considerable research. There are several different types of large models. Sometimes the model is actually quite simple, but is given in a representation that is highly over-tessellated for the given view. That is often the case for scanned objects, such as the famous Stanford Bunny. After proper simplification, such models can usually be rendered with simple visibility algorithms, such as view-frustum culling. Descriptions of examples include L. S. Avila & W. Schroeder, Interactive Visualization of Aircraft and Power Generation Engines, IEEE Visualization '97 at 483-486 (1997); J. El-Sana & Y.-J. Chiang, External Memory View-Dependent Simplification, 19 Computer Graphics Forum 3 (August 2000); and P. Lindstrom et al., A Memory Insensitive Technique for Large Model Simplification, IEEE Visualization 2001 at 121-126 (2001). Another important class of large data comes from terrain models, for which an impressive amount of literature is available, such as P. Lindstrom et al., Real-Time, Continuous Level of Detail Rendering of Height Fields, Proceedings of SIGGRAPH 96 at 109-118 (1996); M. A. Duchaineau et al., ROAMing Terrain: Real-Time Optimally Adapting Meshes, IEEE Visualization '97 at 81-88 (1997); and P. Lindstrom et al., Visualization of Large Terrains Made Easy, IEEE Visualization 2001 at 363-370 (2001). The present application considers the development of techniques for handling large models with high depth complexity, which are not highly over-tessellated with respect to normal viewing conditions. For instance, there are several computer models of real-world environments that do not contain significant amounts of over-tessellated geometry (e.g., those used in CAD or computer games).
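To illustrate the kind of simple visibility algorithm referred to above, the following is a minimal sketch of hierarchical view-frustum culling. It assumes an axis-aligned bounding-box hierarchy with a hypothetical node layout (dictionaries with `min`, `max`, and optional `children` keys) and frustum planes given as coefficients (nx, ny, nz, d); none of these details are taken from the cited systems.

```python
def box_outside_plane(box_min, box_max, plane):
    # plane = (nx, ny, nz, d); points with nx*x + ny*y + nz*z + d >= 0
    # lie in the inside half-space.  Test the box corner farthest along
    # the plane normal: if even that corner is outside, the box is out.
    nx, ny, nz, d = plane
    px = box_max[0] if nx >= 0 else box_min[0]
    py = box_max[1] if ny >= 0 else box_min[1]
    pz = box_max[2] if nz >= 0 else box_min[2]
    return nx * px + ny * py + nz * pz + d < 0

def frustum_cull(node, planes):
    # Hierarchical culling: discard a whole subtree as soon as its
    # bounding box falls entirely outside any frustum plane.
    if any(box_outside_plane(node["min"], node["max"], p) for p in planes):
        return []
    if "children" not in node:
        return [node["id"]]
    visible = []
    for child in node["children"]:
        visible.extend(frustum_cull(child, planes))
    return visible
```

Because whole subtrees are rejected with a single box-plane test, the cost of culling scales with the size of the visible set rather than with the size of the model, which is what makes the technique attractive for large scenes.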
Such models tend to reduce the benefits of level-of-detail techniques, as noted in D. Aliaga et al., MMR: An Interactive Massive Model Rendering System Using Geometric and Image-Based Acceleration, 1999 ACM Symposium on Interactive 3D Graphics 199-206 (April 1999). It is therefore important to use more sophisticated visibility-culling techniques to avoid overdrawing pixels.
Researchers have been interested in rendering large and complex models since the early days of computer graphics. In fact, many of the acceleration techniques in use today were proposed in J. H. Clark, Hierarchical Geometric Models for Visible Surface Algorithms, 19 Communications of the ACM 547-554 (October 1976), including the use of hierarchical spatial data structures, level-of-detail (LOD) management, hierarchical view-frustum and occlusion culling, and working-set management (geometry caching). The idea of exploiting multiprocessor graphics workstations to overlap visibility computations with rendering was first presented in B. J. Garlick, D. R. Baum, & J. M. Winget, Interactive Viewing of Large Geometric Databases Using Multiprocessor Graphics Workstations, SIGGRAPH Course: Parallel Algorithms and Architectures for 3D Image Generation, ACM SIGGRAPH, 239-245 (1990). The system described in J. M. Airey, J. H. Rohlf, & F. P. Brooks, Jr., Towards Image Realism with Interactive Update Rates in Complex Virtual Building Environments, 1990 Symposium on Interactive 3D Graphics 41-50 (March 1990) combined LOD management with the idea of precomputing visibility information. The system used point sampling at preprocessing time to approximate from-region visibility computations. Their system, however, assumed the model was composed of axis-aligned polygons.
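The working-set management (geometry caching) named in Clark's list can be illustrated by a minimal sketch of a fixed-capacity cache with least-recently-used eviction. The `load` callback standing in for a disk fetch, and the class and method names, are hypothetical illustrations, not part of any cited system.

```python
from collections import OrderedDict

class GeometryCache:
    # Fixed-capacity LRU cache for out-of-core geometry.  `load` is a
    # caller-supplied function that fetches a node's geometry from disk.
    def __init__(self, capacity, load):
        self.capacity = capacity
        self.load = load
        self.entries = OrderedDict()  # ordered oldest -> newest

    def fetch(self, node_id):
        if node_id in self.entries:
            self.entries.move_to_end(node_id)   # mark as recently used
            return self.entries[node_id]
        data = self.load(node_id)               # simulated I/O on a miss
        self.entries[node_id] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        return data
```

Keeping only the working set resident is what lets a renderer walk through a model larger than main memory: geometry for distant or recently unvisited regions is evicted, and re-fetched only if the viewer returns.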
To the inventors' knowledge, T. A. Funkhouser, C. H. Séquin, & S. J. Teller, Management of Large Amounts of Data in Interactive Building Walkthroughs, 1992 Symposium on Interactive 3D Graphics 11-20 (March 1992) describes the first published system to support models larger than main memory and to perform speculative prefetching. That system is based on the from-region visibility algorithm described in S. J. Teller & C. H. Séquin, Visibility Preprocessing for Interactive Walkthroughs, Computer Graphics (Proceedings of SIGGRAPH 91) 25, 4, 61-69 (July 1991). Improvements to the original system are proposed in T. A. Funkhouser & C. H. Séquin, Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments, Proceedings of SIGGRAPH 93 at 247-254 (August 1993) and in T. A. Funkhouser, Database Management for Interactive Display of Large Architectural Models, Graphics Interface '96 at 1-8 (May 1996), but their preprocessing stage remained limited to models made of axis-aligned cells.
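The combination of from-region visibility with speculative prefetching can be sketched roughly as follows. This is a hypothetical illustration, not the cited system's algorithm: the cell-adjacency graph and the per-cell potentially visible sets (PVS) are assumed to have been computed in a preprocessing step.

```python
def prefetch_candidates(current_cell, adjacency, pvs, resident):
    # Speculative prefetching: while rendering the current cell, fetch
    # the geometry visible from the cells the viewer may enter next,
    # i.e. the union of the PVS of each neighboring cell, minus the
    # objects already resident in memory.
    wanted = set()
    for neighbor in adjacency[current_cell]:
        wanted |= pvs[neighbor]
    return wanted - resident
```

In such a scheme the prefetcher runs concurrently with rendering, so that by the time the viewer crosses a cell boundary most of the newly visible geometry is already in memory and frame rate does not stall on I/O.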
The Massive Model Rendering (MMR) system described in D. Aliaga et al., MMR: An Interactive Massive Model Rendering System Using Geometric and Image-Based Acceleration, 1999 ACM Symposium on Interactive 3D Graphics 199-206 (April 1999) introduced the idea of replacing geometry that is far from the user's point of view with textured depth meshes (TDMs). TDMs are image impostors that contain depth information and are displayed using projective texture mapping. Their system employed an impressive number of acceleration techniques. They note, however, that some of those acceleration techniques may compete with each other. For example, occlusion culling techniques are most effective when the scene has high depth complexity, but replacing geometry with imagery reduces the depth complexity. The inventors believe that that system was the first to handle models with tens of millions of polygons at interactive frame rates. The major disadvantages of that system were its preprocessing times (which were on the order of weeks), the manual user intervention it required, and its need for large SGI multi-processor machines with several gigabytes of main memory. In 2001, the UNC Walkthrough Group made their massive power plant model (The Walkthru Project at UNC Chapel Hill 2001, http://www.cs.unc.edu/geom/Powerplant/) available to the graphics community. As pointed out in Clark, supra, good models “are at least as valuable as the visible surface algorithms that render them.”
I. Wald, P. Slusallek, & C. Benthin, Interactive Distributed Ray Tracing of Highly Complex Models, Rendering Techniques 2001 at 277-288 (2001) discloses a system able to generate ray-traced images of large models at interactive frame rates. That system can preprocess the UNC power plant model in 2.5 hours, which is two orders of magnitude faster than Aliaga et al., supra. The paper further suggests that the ray-tracing system could benefit from prefetching, which would likely hide additional network latency. Most of the above-described systems use from-region visibility algorithms. The exception is the system described by Wald, which uses ray tracing. That system, however, requires a relatively large number of I/O operations, is too slow for applications requiring high frame rates, and requires expensive hardware.
Other work in this area is reported in L. S. Avila & W. Schroeder, Interactive Visualization of Aircraft and Power Generation Engines, IEEE Visualization '97 at 483-486 (1997), in J. El-Sana & Y.-J. Chiang, External Memory View-Dependent Simplification, 19 Computer Graphics Forum 3 (August 2000), and in B.-O. Schneider et al., Brush As a Walkthrough System for Architectural Models, Proc. 5th Eurographics Workshop on Rendering 389-399 (1995). Those systems do not use occlusion culling, however, which makes them poorly suited to rendering scenes of high depth complexity.
Recently, substantial research has been conducted in the area of out-of-core graphics and visualization. Those efforts include F. Bernardini et al., The Ball-Pivoting Algorithm for Surface Reconstruction, 5 IEEE Transactions on Visualization and Computer Graphics 349-359 (October-December 1999); M. Pharr et al., Rendering Complex Scenes with Memory-Coherent Ray Tracing, Proceedings of SIGGRAPH 97 at 101-108 (August 1997); Y.-J. Chiang et al., I/O Optimal Isosurface Extraction, IEEE Visualization '97 at 293-300 (November 1997); Y.-J. Chiang et al., Interactive Out-of-Core Isosurface Extraction, IEEE Visualization '98 at 167-174 (October 1998); M. Cox et al., Application-Controlled Demand Paging for Out-of-Core Visualization, IEEE Visualization '97 at 235-244 (November 1997); S.-K. Ueng et al., Out-of-Core Streamline Visualization on Large Unstructured Meshes, 3 IEEE Transactions on Visualization and Computer Graphics 370-380 (October-December 1997); and H.-W. Shen et al., A Fast Volume Rendering Algorithm for Time-Varying Fields Using a Time-Space Partitioning (TSP) Tree, IEEE Visualization '99 at 371-378 (October 1999). Those techniques were developed to cope with models that are too large to fit in main memory. None of those works, however, addresses the real-time rendering of large polygonal models.
There is presently a need for a method that renders large, high depth complexity scenes at a frame rate and image quality suitable for walk-through simulation. The method should require reasonable preprocessing time and should run using low-cost, commodity hardware. To the inventors' knowledge, there is currently no method available to fill that need.