Computer graphics is used in a wide variety of applications, such as business, science, animation, simulation, computer-aided design, process control, electronic publication, gaming, and medical diagnosis. In an effort to portray a more realistic representation of the real world, three-dimensional objects are transformed into models having the illusion of depth for display on a two-dimensional computer screen. This is accomplished by using a number of polygons to represent a three-dimensional object. Next, a scan conversion process is used to determine which pixels of a computer display fall within each of the specified polygons. Thereupon, texture is selectively applied to those pixels residing within the specified polygons. In addition, hidden or obscured surfaces, which are normally not visible, are eliminated from view. Finally, lighting, shading, shadowing, translucency, and blending effects are applied.
For a high-resolution display (1024×1024) having over a million pixels, values must be generated for each and every pixel; displaying a three-dimensional scene on a computer system is thus a rather complicated task that requires a tremendous amount of processing power. Furthermore, the computer system must be extremely fast to handle dynamic computer graphics for displaying three-dimensional objects in motion. Indeed, even more processing power is required for interactive computer graphics, whereby 3-D images change in response to user input (e.g., flight simulation). And as a scene becomes "richer" through the addition of more details and objects, more computation is required to render that scene. Rendering the millions of pixels that make up such complex scenes is an extremely demanding task for a computer.
In light of the enormous difficulties associated with creating and displaying computer-generated images, there have been efforts to develop high-level programming languages, such as the Virtual Reality Modeling Language (VRML), in an effort to greatly simplify this task. VRML files are formatted so that resident graphics engines can use the same basic building blocks stored in a library to construct realistic 3-D images, analogous to snapping together Lego blocks. A graphics application programming interface (API) is then used to take advantage of the powerful and extensive feature sets of high-level programming languages. Basically, an API comprises a library of commands that allows a programmer to best utilize the graphics hardware in a computer. In designing the API, proper and careful attention must be directed to the selection of which features and attributes are to be included (e.g., geometric morphing, view culling, levels of detail, 3-D audio, texture mapping, modeling, transformation, color, NURBS, fog, alpha blending, smooth shading, motion blur, etc.). In particular, the definition of a scene graph containing geometry, sound, and a transformation hierarchy dramatically impacts how efficiently an object can be rendered for display.
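The transformation hierarchy mentioned above can be sketched as a toy scene-graph node. This is purely an illustrative sketch; the class and field names are hypothetical and do not correspond to VRML's actual node set or any particular API:

```python
# Minimal scene-graph sketch (hypothetical names, not VRML's actual node types).
# Each node carries a local transform and optional geometry; a child inherits
# the accumulated transform of its ancestors, forming the transformation
# hierarchy described in the text.

class SceneNode:
    def __init__(self, name, transform=(0.0, 0.0, 0.0), geometry=None):
        self.name = name
        self.transform = transform  # toy local translation (x, y, z)
        self.geometry = geometry    # e.g. a mesh reference; None for pure grouping
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, parent_offset=(0.0, 0.0, 0.0)):
        """Yield (name, world_offset) for every node, depth-first."""
        world = tuple(p + t for p, t in zip(parent_offset, self.transform))
        yield self.name, world
        for child in self.children:
            yield from child.traverse(world)

# A functional ("humanistic") organization of a car, as in the discussion below.
root = SceneNode("car")
body = root.add(SceneNode("body", transform=(0.0, 1.0, 0.0)))
body.add(SceneNode("hood", transform=(1.5, 0.25, 0.0), geometry="hood_mesh"))
print(list(root.traverse()))
```

Rendering such a graph amounts to traversing it and drawing each node's geometry with its accumulated transform, which is why the graph's organization directly affects rendering cost.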
Unfortunately, the way in which a human would intuitively organize a scene graph is oftentimes not the most efficient way for a computer system to render that scene. For example, a human might organize a car according to functionality. The car consists of a body, engine, wheels, etc. In turn, the engine consists of an engine block, pistons, a carburetor, etc. However, for rendering purposes, it might be more efficient to render the car according to spatial criteria. For example, it might be faster to render the front of the car, then the middle of the car, and finally, the back end of the car. Alternatively, it might be faster to render an object according to its graphics state or node changes. For example, instead of rendering a blue car hood, a black front wheel, a blue car door, a black rear wheel, and then a blue trunk, it is faster, for rendering purposes, to render all of the black wheels at one time and then render all of the blue body parts at one time. In this manner, there are only two color changes as opposed to having to switch the color four different times. There are also other factors to be considered when organizing a scene graph for optimal rendering, including the object's granularity, level of detail, culling, picking and highlighting, and tessellation. Hence, there exists a dilemma in choosing how a scene graph is to be organized. On the one hand, the scene graph should be organized according to a humanistic framework for the benefit of a human user. On the other hand, the scene graph should be organized so that it can be rendered faster and more efficiently by a computer system.
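The state-sorting idea in the car example above can be sketched in a few lines. This is an illustrative sketch only (the part names and helper function are hypothetical): grouping draw calls by a shared graphics state, here color, reduces the number of times that state must be switched:

```python
# Sketch of sorting draw calls by graphics state (color) to reduce state
# switches, following the blue/black car example in the text. Illustrative
# names only; real renderers sort on shader, texture, material, etc.

draw_calls = [
    ("hood", "blue"), ("front wheel", "black"), ("door", "blue"),
    ("rear wheel", "black"), ("trunk", "blue"),
]

def count_color_switches(calls):
    """Count how many times the bound color must change mid-sequence."""
    return sum(1 for a, b in zip(calls, calls[1:]) if a[1] != b[1])

naive = count_color_switches(draw_calls)               # interleaved: 4 switches
sorted_calls = sorted(draw_calls, key=lambda c: c[1])  # group parts by color
optimized = count_color_switches(sorted_calls)         # black block, blue block: 1 switch
print(naive, optimized)
```

With the calls grouped, the renderer sets the color once per group (two color groups in total) instead of switching back and forth four times, which is exactly the saving the example describes.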
The present invention offers a solution to this dilemma by maintaining two or more distinct representations of the same scene graph. One representation is organized so that it is intuitive to a human user. The other representation is organized so as to optimize rendering performance. In the present invention, the different representations are inter-related such that when the user makes a change in the user representation, the change is automatically and transparently carried over and reflected in the other representation(s). When the computer actually goes to render the scene graph, it selects and uses the representation that has been specially optimized for rendering purposes. Thereby, the present invention offers the best of both worlds: fast, efficient rendering is now possible without sacrificing ease of the human interface.
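The dual-representation scheme can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual data structures or API: one mapping models the user-facing organization, a parallel list keeps the same parts grouped by color for rendering, and every edit made through the user representation is mirrored into the render-optimized one automatically:

```python
# Sketch of maintaining two linked representations of one scene graph
# (hypothetical class and method names). The user edits the intuitive view;
# the render-ordered view is kept in sync transparently.

class DualSceneGraph:
    def __init__(self):
        self.user_view = {}    # part name -> color, organized for people
        self.render_view = []  # (color, part) pairs kept grouped by color

    def add_part(self, name, color):
        self.user_view[name] = color
        self._rebuild_render_view()

    def set_color(self, name, color):
        # User edits go through the intuitive representation...
        self.user_view[name] = color
        # ...and are carried over to the render-optimized one automatically.
        self._rebuild_render_view()

    def _rebuild_render_view(self):
        # Group draw work by color so the renderer minimizes state switches.
        self.render_view = sorted(
            (color, name) for name, color in self.user_view.items()
        )

g = DualSceneGraph()
g.add_part("hood", "blue")
g.add_part("front_wheel", "black")
g.add_part("door", "blue")
g.set_color("door", "red")   # change made in the user representation
print(g.render_view)         # render representation already reflects it
```

At render time the computer would walk `render_view`, never the user hierarchy, so the user's mental model and the renderer's ordering can differ freely while staying consistent.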