A high performance 3-D graphics system has a frame buffer that includes both a color buffer for storing pixel intensity values for the different pixel addresses and a depth, or Z, buffer for storing depth information for the same pixels. Pixel color and depth information arise from a rendering (or rasterization) process that converts small primitives, such as polygons identified by their vertices, into contiguous addressable locations corresponding to illumination positions (pixels) upon a graphics output device, such as a CRT. The overall image or scene is converted into a (typically large) collection of such polygons. The overall collection is generally composed of sub-collections, which may in turn have their own sub-collections. For example, an engineering view of an automobile under design may include collections that describe the body in relation to a frame, wheels, engine and other drive train elements, each having its own collections and sub-collections of polygons. The division into collections of polygons is managed by a graphics application program that executes on a host computer. The relationships between the polygons are defined and described in a database. The host computer typically uses a graphics accelerator to assist in the processing of the polygons. As each polygon is processed it produces a collection of pixels that fills in the region defined by its vertices. To make a processed polygon visible, the resulting pixel values need to be stored in the color buffer portion of the frame buffer.
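The paired color-and-depth organization described above can be sketched as a simple data structure. This is a minimal illustration only, not the hardware arrangement of the actual system; the class and attribute names are assumptions chosen for clarity.

```python
# Sketch of a frame buffer pairing a color buffer with a Z buffer,
# one entry of each per pixel address. Names are illustrative assumptions.

class FrameBuffer:
    def __init__(self, width, height, far=float("inf")):
        self.width, self.height = width, height
        # Color buffer: one intensity value (here an RGB triple) per pixel.
        self.color = [[(0, 0, 0)] * width for _ in range(height)]
        # Z buffer: one depth value per pixel, initialized to "infinitely far"
        # so that the first pixel written at any address always wins.
        self.depth = [[far] * width for _ in range(height)]
```

The key point is simply that the two buffers are indexed by the same pixel addresses, so every color value has a corresponding depth value.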
When the user specifies a point of view relative to the object and a viewing volume to contain the object (as well as other things, such as lighting, too numerous to mention here), it becomes possible to process a list of polygons describing the object or scene, writing their color values into the color buffer and their corresponding depth values into the Z buffer. However, just as we humans don't have X-ray vision, the color buffer needs to end up with only those pixel values that describe the selected view (the projection of the desired slice of the object onto the viewing screen), and not any pixel values for locations within the object that are deeper along the viewing axis. This is where the Z buffer comes into use. It holds a depth value for each pixel value stored in the color buffer. A mode of Z buffer operation may be instituted that compares an incoming new Z value Z_new against an existing, already stored Z value Z_old. If the comparison is favorable, then the new pixel values for color and depth are allowed to overwrite the previous ones. In high performance systems this is usually done with hardware for speed reasons, although it can also be done in software.
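The compare-and-overwrite step just described can be sketched in software as follows. This is an illustrative assumption of one common convention (a smaller Z means nearer to the viewer, and "favorable" means the incoming pixel is nearer); actual systems typically make the comparison mode selectable.

```python
# Sketch of the Z-buffer depth test. Plain dicts keyed by pixel address
# stand in for the hardware buffers; all names are assumptions.

def z_buffered_write(color_buf, z_buf, pixel, color, z_new):
    """Overwrite the stored pixel only if the comparison is favorable,
    here taken to mean the incoming depth is nearer (smaller)."""
    z_old = z_buf.get(pixel, float("inf"))  # unwritten pixels start "infinitely far"
    if z_new < z_old:
        color_buf[pixel] = color  # new color overwrites the old...
        z_buf[pixel] = z_new      # ...and the new, nearer depth is recorded
        return True
    return False                  # deeper pixel: discarded, buffers untouched
```

For example, writing a pixel at depth 10.0 and then another at depth 5.0 leaves the nearer (5.0) pixel in the buffer, while a later write at depth 7.0 is rejected.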
A more thorough examination of Z buffering may be found in U.S. Pat. No. 4,961,153, entitled GRAPHICS FRAME BUFFER WITH STRIP Z BUFFERING AND PROGRAMMABLE Z BUFFER LOCATION, filed Aug. 18, 1987 and issued on Oct. 2, 1990.
Z buffering is all well and good, but it can consume a lot of time to transform all the polygon vertices and check all the pixels for the entire object or scene. This limits the rate at which new contents for the frame buffer can be computed, which in turn limits the speed of such features as rotation, animation or other operations that are interactive. To speed things up it is now common to include the notion of bounding volumes in the data structure that describes the object. A bounding volume is an easy-to-render primitive, such as a cube, that is assumed to completely contain some subset of the collections and sub-collections of polygons. (A bounding volume may have a fair number of pixels on its surface, but it has far fewer vertices that need to be transformed, and that produces a significant savings in time.) In the case of the above mentioned automobile being designed, a bounding volume might enclose the entire engine. A different bounding volume could enclose the crankshaft within the engine, and a different one the oil pump, etc. When it is time to compute new contents for the frame buffer, bounding volumes can be sorted along the viewing axis, from near to far (ambiguities can be resolved in any suitable way--randomly if need be). If the point of view is selected such that the hood of the car is visible from the top, then neither the engine nor any of its internal components will be visible. The bounding volume containing the engine is, say, a rectangular solid. If, as a test, we treat it as a primitive in its own right and ascertain that none of the pixel locations on its surfaces would be visible (because they are behind those of the hood), then there is no need to attempt any rendering of the polygons associated with any collections contained within the bounding volume for the engine. This allows the engine to be skipped, and so on. The savings in time can be considerable, and allows the completed image to be ready sooner.
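The culling procedure above--test a bounding volume's surface pixels against the Z buffer without writing, and skip its contents if none would be visible--can be sketched as follows. The dict-based buffers, the near-to-far sort key, and all names here are illustrative assumptions, not the system's actual implementation.

```python
# Sketch of occlusion culling with bounding volumes. The Z buffer is a dict
# of pixel -> depth (smaller z = nearer); names are assumptions.

def bounding_volume_visible(z_buf, volume_pixels):
    """volume_pixels: iterable of ((x, y), z) for the volume's surface.
    A read-only depth test: returns True if any surface pixel would win."""
    for pixel, z in volume_pixels:
        if z < z_buf.get(pixel, float("inf")):
            return True   # at least one pixel of the volume would show
    return False          # fully occluded: contents can be ignored

def render_scene(z_buf, volumes, render_polygons):
    """Sort volumes near-to-far along the viewing axis, then render only
    the contents of volumes whose surfaces are not fully occluded."""
    for volume in sorted(volumes,
                         key=lambda v: min(z for _, z in v["pixels"])):
        if bounding_volume_visible(z_buf, volume["pixels"]):
            render_polygons(volume["contents"])  # volume may show: render it
        # else: skip every polygon inside this bounding volume
```

In the automobile example, if the hood's pixels are already in the Z buffer at a nearer depth, the engine's bounding volume fails the test and all the engine's polygons are skipped without being transformed.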
The general term for this topic is "occlusion culling" and it is the subject of the above mentioned patent application to Olsen, Scott and Casey that was incorporated herein by reference. Those interested in still more information about this topic may wish to consult a standard computer graphics text, such as Fundamentals of Interactive Computer Graphics, by James D. Foley and Andries van Dam, published by Addison-Wesley Co. in July 1984 (2nd ed.). See all of Chapter 11 and Section 5 of Chapter 15.
As powerful as occlusion culling is, it is not the last performance enhancement possible in such graphics systems. It would be desirable if the technique of occlusion culling could be made more flexible by providing an indication of how much of a bounding volume is certainly not visible, and how much might contain visible polygons. It must be remembered that bounding volumes are chosen for their simple shape (small number of polygons), the better to be easy to render. Accordingly, they generally do not fit tightly over their contents. This means that the simple condition of a portion of a bounding volume being visible does not guarantee that any of its contents will be visible. All that can be said with certainty is that if none of the bounding volume is visible, then all of its contents can be ignored. But suppose that at most only ten or twenty pixels of a bounding volume will be visible, out of, say, several thousand, or perhaps several tens of thousands. It may be perfectly acceptable to the user to ignore that tiny amount as if it were truly not visible, and take the slight loss of detail as a penalty for increased speed of rendering. The penalty is only temporary, anyway, since when final accurate results are desired, a completely correct rendition can still be obtained. Thus, it would be desirable to allow the user to have such a mode of graphics operation, which we term "degree of visibility testing".
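The degree-of-visibility mode described above can be sketched by extending the occlusion test from a yes/no answer to a count, compared against a user-chosen threshold. This is a minimal illustration under assumed names and a dict-based Z buffer; it is not the mechanism claimed by the patent itself.

```python
# Sketch of "degree of visibility testing": count a bounding volume's
# surface pixels that would survive the depth test (smaller z = nearer),
# and treat the volume as invisible when the count is at or below a
# user-supplied threshold. All names are illustrative assumptions.

def visible_pixel_count(z_buf, volume_pixels):
    """Read-only count of surface pixels that would win the comparison."""
    return sum(1 for pixel, z in volume_pixels
               if z < z_buf.get(pixel, float("inf")))

def should_render(z_buf, volume_pixels, threshold):
    """Render the volume's contents only if more than `threshold` pixels
    would show, trading a slight, temporary loss of detail for speed."""
    return visible_pixel_count(z_buf, volume_pixels) > threshold
```

With a threshold of zero this reduces to ordinary occlusion culling; with, say, a threshold of twenty, a volume showing only ten or twenty pixels out of tens of thousands is skipped exactly as the passage above contemplates.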