The present invention relates in general to computer graphics, and in particular to the use of the symmetrical characteristics of a set of vertices in rendering.
Many computer generated images are created by mathematically modeling the interaction of light with a three-dimensional (3D) scene from a given viewpoint and projecting the result onto a two-dimensional (2D) “screen.” This process, called rendering, generates a 2D image of the scene from the given viewpoint and is analogous to taking a digital photograph of a real-world scene.
As the demand for computer graphics, and in particular for real-time computer graphics, has increased, computer systems with graphics processing subsystems adapted to accelerate the rendering process have become widespread. In these systems, the rendering process is often divided between a computer's general-purpose central processing unit (CPU) and a graphics processing subsystem. Typically, the CPU performs high-level operations, such as determining the position, motion, and collision of objects in a given scene. From these high-level operations, the CPU generates a set of rendering commands and data defining the desired rendered image (or images). Rendering commands and data can define scene geometry by reference to groups of vertices. Groups of points, lines, triangles, and/or other simple polygons defined by the vertices may be referred to as "primitives." Each vertex may have attributes such as color, world space coordinates, texture-map coordinates, and the like. Rendering commands and data can also define other parameters for a scene, such as lighting, shading, textures, motion, and/or camera position. From the set of rendering commands and data, the graphics processing subsystem creates one or more rendered images. In a given image to be rendered, groups of vertices may be symmetrical to one another. Similarly, primitives (or subparts thereof) may be symmetrical to one another.
Graphics processing subsystems typically use a stream, or pipeline, processing model, in which input elements are read and operated on successively by a chain of processing units. The output of one processing unit is the input to the next processing unit in the chain. A typical pipeline includes a number of processing units that, for example, generate attribute values for the 2D or 3D vertices, create parameterized attribute equations for points in each primitive, and determine which particular pixels or sub-pixels are covered by a given primitive. Typically, data flows one way, "downstream," through the chain of units, although some processing units may be operable in a "multi-pass" mode, in which data that has already been processed by a given processing unit can be returned to that unit for additional processing.
The data sent to the graphics processing subsystem typically defines a set of vertices to be used in rendering the final image. However, the speed at which the entire set of vertices can be rendered through the pipeline may be limited by the available bandwidth to the graphics processing unit (GPU). Many computer graphics applications require complex, detailed models. As rendered scenes become more complex, they typically include a larger number of vertices. Due to the complexity of managing all vertices in a scene, more vertex data than is strictly necessary is typically sent through the pipeline. Processing bottlenecks can occur, for instance, if the system design does not provide sufficient bandwidth to communicate all of the vertices through various stages of the pipeline.
It is therefore desirable to send less vertex data through select parts of the graphics pipeline, in order to decrease wasteful rendering operations, reduce the bandwidth requirements for communicating vertices and associated attributes, and improve rendering performance.