Conventional graphics packages provide resources for rendering a two-dimensional view of a three dimensional scene. These graphics packages generally include functions for generating the individual picture components within the scene. These functions may be used by programs written in high-level programming languages. An example of such a graphics package is the OpenGL standard developed by Silicon Graphics, Inc. OpenGL includes functions for generating picture components such as straight lines, polygons, circles, and the like.
FIG. 1 is a flowchart that depicts the steps that are performed in a conventional graphics package to render a two-dimensional view of a three dimensional scene. Initially, objects that are resident within the three dimensional scene must be modeled (step 102 in FIG. 1). In general, models of the objects that inhabit the scene are constructed out of simpler surfaces ("primitives") that are combined to make more complex objects. Examples of such primitives include polygons and triangles. FIG. 2 shows an example of how a five-pointed star A may be formed from triangles B, C, D, E, and F and a pentagon G. The vertices within the star and the primitives are numbered 1-10. For more sophisticated objects, triangular meshes and other polygon representations may be utilized to model the objects. The models of the objects are generally specified in Cartesian coordinates as part of a modeling coordinate system.
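The decomposition of FIG. 2 can be sketched in code as a modeling step: a list of shared vertices in modeling coordinates, plus index lists naming the primitives built from them. The coordinates below are illustrative (the text gives only the numbering 1-10, not positions), and the function name is hypothetical.

```python
import math

def star_model(outer_r=1.0, inner_r=None):
    """Model a five-pointed star as five tip triangles plus a central
    pentagon, mirroring the decomposition in FIG. 2. The geometry is
    illustrative; the source specifies only the vertex numbering.
    Returns (vertices, triangles, pentagon): vertices are (x, y) pairs
    in modeling coordinates, and the index lists refer into them."""
    if inner_r is None:
        # Inner radius chosen so each star edge is a straight line.
        inner_r = outer_r * math.sin(math.radians(18)) / math.sin(math.radians(54))
    vertices = []
    for k in range(5):
        a_out = math.radians(90 + 72 * k)       # a star tip
        a_in = math.radians(126 + 72 * k)       # the notch after that tip
        vertices.append((outer_r * math.cos(a_out), outer_r * math.sin(a_out)))
        vertices.append((inner_r * math.cos(a_in), inner_r * math.sin(a_in)))
    # Tip triangle k: outer vertex k flanked by its two adjacent inner vertices.
    triangles = [(2 * k, 2 * k + 1, (2 * k - 1) % 10) for k in range(5)]
    pentagon = [2 * k + 1 for k in range(5)]    # the five inner vertices
    return vertices, triangles, pentagon
```

Sharing vertices among primitives in this way is what triangular meshes generalize for more sophisticated objects.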
The next step in developing a two-dimensional view of a three dimensional scene is to position the modeled objects in the three dimensional space that constitutes the scene (step 104 in FIG. 1). The scene has its own coordinate system, known as a world coordinate system. Transformations are applied to the models of the objects to position the objects within the world coordinate system.
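The positioning transformations are conventionally expressed as 4x4 homogeneous matrices applied to each model vertex. A minimal sketch, assuming a rotation about the z axis followed by a translation (the angle and offsets are placeholders, not values from the text):

```python
import math

def to_world(point, angle_deg=0.0, tx=0.0, ty=0.0, tz=0.0):
    """Position a modeled point in the world coordinate system by
    applying a z-axis rotation and then a translation, combined into
    one 4x4 homogeneous matrix as conventional packages do."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    m = [
        [c, -s, 0.0, tx],
        [s,  c, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]
    x, y, z = point
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(m[r][i] * p[i] for i in range(4)) for r in range(3))

# e.g. to_world((1.0, 0.0, 0.0), angle_deg=90, tx=5.0) is approximately (5.0, 1.0, 0.0)
```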
In order to obtain a perspective view of the three dimensional scene, a frustum is defined (step 106 in FIG. 1). FIG. 3 depicts an example of a frustum 300. The frustum 300 constitutes the truncated pyramid formed by the volume between a near clipping plane 304 and a far clipping plane 302. The frustum 300 is used to determine the view 308 that will be obtained for the three dimensional scene, where the view is a perspective view. Reference point 310 constitutes the apex of a viewing pyramid that includes the frustum 300. A viewer's eye is positioned at this apex when the viewscreen 308 is rendered. The viewing volume 306 holds the objects that may potentially be visible on the viewscreen 308. Defining the frustum 300, in step 106, entails defining the far clipping plane 302, the near clipping plane 304, and the position of the viewscreen 308 relative to the apex 310 and the near clipping plane 304. Objects within the viewing volume 306 are projected onto the viewscreen 308 to create the view. In particular, a perspective transformation is applied to project the objects in the viewing volume 306 onto the viewscreen 308 so that more distant objects appear smaller than closer objects. The view is then rendered on a display device (step 106 in FIG. 1). Transforms are applied to convert the view into device coordinates that are used by an output device.
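The perspective transformation described above reduces, for a point in eye coordinates, to projection by similar triangles onto the viewscreen plane, with clipping against the near and far planes. A minimal sketch, assuming the eye at the origin looking along +z and a viewscreen at distance d (the function name and default plane distances are placeholders):

```python
def project_to_viewscreen(point, d=1.0, near=1.0, far=100.0):
    """Perspective-project a point in eye coordinates (viewer's eye at
    the origin, looking along +z) onto a viewscreen at distance d.
    Points outside the near/far range fall outside the viewing volume
    and are clipped (None is returned)."""
    x, y, z = point
    if not (near <= z <= far):
        return None  # outside the viewing volume; clipped
    # Similar triangles: the farther the point, the smaller its projection.
    return (d * x / z, d * y / z)

# A point twice as far away projects at half the size:
# project_to_viewscreen((2.0, 2.0, 2.0))  ->  (1.0, 1.0)
# project_to_viewscreen((2.0, 2.0, 4.0))  ->  (0.5, 0.5)
```

The division by z is what makes more distant objects appear smaller than closer ones, as the text describes.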
Such rendering of two-dimensional views of three dimensional scenes has been widely used in virtual reality and desktop 3D applications. Many virtual reality systems provide a three dimensional space in which a user may be immersed. Typically, a user may navigate throughout this immersive three dimensional space. The two-dimensional view of the three dimensional space is updated as the user moves through the space.
Unfortunately, such virtual reality systems suffer from a number of drawbacks. First, these virtual reality systems suffer from "magical appearance syndrome," wherein a user magically appears within the three dimensional space. There is no initial place from which a user begins navigating; instead, the user simply appears within the three dimensional space. As a result, it is difficult for a user to become properly oriented within the three dimensional space. Second, it is difficult to navigate within such three dimensional spaces because a user is not given the same information that is available when moving about the real world. In particular, people typically make significant use of peripheral vision, and such peripheral vision is absent from desktop-based 3D virtual reality systems.