Visualization of volumetric objects represented by three-dimensional scalar fields is one of the most complete, realistic, and accurate ways to represent the internal and external structures of real 3-D (three-dimensional) objects. As an example, Computed Tomography (CT) digitizes images of real 3-D objects (such as the inside of the human body) and represents them as a discrete 3-D scalar field. Magnetic Resonance Imaging (MRI) is another system that scans and depicts the internal structures of real 3-D objects (e.g., the human body).
As another example, the petroleum industry uses seismic imaging techniques to generate a 3-D image volume of a 3-D region in the earth. As in the human body, some important structures, such as geological faults or salt domes, may be embedded within the region and are not necessarily on the exterior surface of the region.
Direct volume rendering is a well-known computer graphics technique for visualizing the interior of a 3-D region represented by such a 3-D image volume on a 2-D image plane, e.g., as displayed on a computer monitor. Hence a typical 3-D dataset is a group of 2-D image “slices” of a real object generated by a CT or MRI machine or by seismic imaging. Typically the scalar attribute or voxel (volume element) at any point within the image volume is associated with a plurality of classification properties, such as color (e.g., red, green, and blue) and opacity, which can be defined by a set of lookup tables. During computer rendering, a plurality of “rays” is cast from the 2-D image plane into the volume, and each ray is attenuated or reflected by the volume. The amount of attenuated or reflected ray “energy” of each ray is indicative of the 3-D characteristics of the objects embedded within the image volume, e.g., their shapes and orientations, and further determines a pixel value on the 2-D image plane in accordance with the opacity and color mapping of the volume along the corresponding ray path. The pixel values associated with the plurality of ray origins on the 2-D image plane form an image that can be rendered by computer software on a computer monitor. Computer enabled volume rendering as described here may use conventional volume ray tracing, volume ray casting, splatting, shear warping, or texture mapping. A more detailed description of direct volume rendering is found in “Computer Graphics: Principles and Practice” by Foley, van Dam, Feiner and Hughes, 2nd Edition, Addison-Wesley Publishing Company (1996), pp. 1134-1139.
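The per-ray accumulation described above can be sketched as a front-to-back compositing loop. This is a minimal illustrative sketch, not the method of any particular rendering system; the transfer function, sample values, and early-termination threshold are assumptions chosen for clarity:

```python
def composite_ray(samples, transfer_function):
    """Accumulate color and opacity front to back along one ray path.

    samples: scalar (voxel) values sampled at successive points on the ray.
    transfer_function: maps a scalar value to ((r, g, b), alpha).
    Returns the composited (r, g, b) pixel value for this ray origin.
    """
    color = [0.0, 0.0, 0.0]
    alpha_acc = 0.0  # opacity accumulated so far along the ray
    for s in samples:
        (r, g, b), a = transfer_function(s)
        weight = (1.0 - alpha_acc) * a  # remaining transparency attenuates this sample
        color[0] += weight * r
        color[1] += weight * g
        color[2] += weight * b
        alpha_acc += weight
        if alpha_acc >= 0.99:  # early ray termination once nearly opaque
            break
    return tuple(color)

# Illustrative transfer function: transparent below 0.5, opaque red above.
tf = lambda s: ((1.0, 0.0, 0.0), 1.0) if s >= 0.5 else ((0.0, 0.0, 0.0), 0.0)
pixel = composite_ray([0.1, 0.2, 0.9, 0.3], tf)
```

Repeating this loop for every ray origin on the 2-D image plane yields the rendered image; the samples behind the first opaque voxel contribute nothing, which is why the loop may terminate early.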
In the medical imaging example discussed above, even though a doctor using CT or MRI equipment and conventional methods can generate arbitrary 2-D image slices/cuts of an object (e.g., a human heart or knee) by intersecting the image volume in any direction, no single image slice is able to visualize the entire exterior surface of the object. In contrast, a 2-D image generated through direct volume rendering of the CT image volume can easily display on an associated computer monitor the 3-D characteristics of the object (e.g., a heart, which is very important in many types of cardiovascular disease diagnosis).
Similarly in the field of oil exploration, direct volume rendering of 3-D seismic data has proved to be a powerful tool that can help petroleum engineers to determine more accurately the 3-D characteristics of geological structures embedded in a region that are potential oil reservoirs and to increase oil production significantly.
One of the most common and basic structures used to control volume rendering is the transfer function. In the context of volume rendering, a transfer function defines the classification/translation of the original elements of volumetric data (voxels) to their representation on the computer monitor screen, most commonly a color (e.g., red, green, and blue) and opacity classification (often referred to as “color and opacity”). Hence each voxel has a color and an opacity value defined by a transfer function. The transfer function itself is typically a simple mathematical function (e.g., a ramp), a piecewise linear function, or a lookup table. More generally, transfer functions in this context assign renderable (by volume rendering) optical properties to the numerical values (voxels) of the dataset. The opacity function determines the contribution of each voxel to the final (rendered) image.
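A transfer function realized as a lookup table can be sketched as follows. The ramp mapping is an illustrative assumption (an 8-bit voxel range mapped to a red ramp with matching opacity), not a classification used by any particular system:

```python
def build_transfer_table():
    """Build a 256-entry lookup table mapping an 8-bit voxel value
    to (red, green, blue, opacity)."""
    table = []
    for v in range(256):
        t = v / 255.0
        # Simple ramp: low values are dark and transparent,
        # high values are bright red and fully opaque.
        table.append((t, 0.0, 0.0, t))
    return table

table = build_transfer_table()
# Classifying a voxel is then a single table lookup:
r, g, b, opacity = table[200]
```

During rendering, each sampled voxel value indexes this table to obtain its color and opacity contribution, which is why a lookup table is often preferred over evaluating an analytic function per sample.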
A common need of volume rendering applications is the extraction of traditional computer graphics polygonal objects from volumetric data. A polygon in computer graphics is a 2-D shape whose position is defined by the XYZ coordinates of its vertices (corners); a mesh of polygons represented in 3-D space can be used to model a 3-D manifold with an infinitely thin surface, thus visualizing only a tiny subset of the actual object being represented. Volumetric data and polygonal object models representing the volumetric data are different kinds of data in this field: volumetric data is a 3-D array of scalar values (voxels), while the well-known polygonal object model is a list of polygonal objects such as triangles or rectangles, each represented by a grouping of corresponding XYZ vertices with a color assigned at each vertex. The 3-D array of voxels can be used to visualize all internal and external structures of the object, rather than only the infinitely thin manifold on its exterior.
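The contrast between the two representations can be made concrete with a minimal sketch. The grid size, voxel value, and vertex data below are illustrative assumptions only:

```python
# Volumetric representation: a dense 3-D array of scalar values (voxels),
# here a 4x4x4 grid such as might come from stacked CT slices.
nx, ny, nz = 4, 4, 4
volume = [[[0.0 for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]
volume[1][2][3] = 0.75  # one voxel's scalar value

# Polygonal representation: XYZ vertices with a color assigned at each
# vertex, plus index triples naming which vertices form each triangle.
vertices = [
    ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),  # (position, RGB color)
    ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
    ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)),
]
triangles = [(0, 1, 2)]  # one triangle over the three vertices above
```

The volume stores a value at every interior point, whereas the polygonal model stores only the infinitely thin surface; converting from the former to the latter is precisely the extraction problem described above.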
Even though direct volume rendering plays a key role in many important fields, currently available 3-D printing devices expect as input a polygonal object representation of 3-D objects to be printed. Thus, porting the visual information from volume rendered images to polygonal object models is a significant technical problem.