Visualization of volumetric objects represented by three-dimensional (3-D) scalar fields is one of the most complete, realistic, and accurate ways to depict the internal and external structures of real 3-D objects.
For example, Computed Tomography (CT) digitizes images of real 3-D objects and represents them as a discrete 3-D scalar field. Magnetic Resonance Imaging (MRI) is another system for scanning and depicting the internal structure of real 3-D objects.
As another example, the oil industry uses seismic imaging techniques to generate a 3-D image volume of a 3-D region in the earth. Some important geological structures, such as faults or salt domes, may be embedded within the region and are not necessarily on the surface of the region.
Direct volume rendering is a computer-enabled technique developed for visualizing the interior of a solid region, represented by such a 3-D image volume, on a 2-D image plane, e.g., displayed on a computer monitor. Hence a typical 3-D dataset is a group of 2-D image “slices” of a real object generated by the CT or MRI machine or by seismic imaging. Typically the scalar attribute of each voxel (volume element) within the image volume is associated with a plurality of classification properties, such as color (red, green, blue) and opacity, which can be defined by a set of lookup tables. A plurality of rays is cast from the 2-D image plane into the volume, where the rays are attenuated or reflected by the volume. The amount of attenuated or reflected energy of each ray is indicative of the 3-D characteristics of the objects embedded within the image volume, e.g., their shapes and orientations, and determines a pixel value on the 2-D image plane in accordance with the opacity and color mapping of the volume along the corresponding ray path. The pixel values associated with the plurality of ray origins on the 2-D image plane form an image that can be rendered by computer software on a computer monitor. A more detailed description of direct volume rendering appears in “Computer Graphics: Principles and Practice” by Foley, van Dam, Feiner and Hughes, 2nd Edition, Addison-Wesley Publishing Company (1996), pp. 1134-1139.
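The accumulation of color and opacity along a ray described above can be sketched as front-to-back alpha compositing. This is a minimal illustration, not the method of any particular system; the function name and the early-termination threshold are assumptions for the example.

```python
import numpy as np

def composite_ray(samples_rgb, samples_alpha):
    """Front-to-back compositing of color/opacity samples along one ray.

    samples_rgb:   (N, 3) array of per-sample colors in [0, 1]
    samples_alpha: (N,) array of per-sample opacities in [0, 1]
    Returns the accumulated RGB pixel value for the ray origin.
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of ray energy not yet absorbed
    for rgb, a in zip(samples_rgb, samples_alpha):
        color += transmittance * a * rgb   # this sample's contribution
        transmittance *= (1.0 - a)         # attenuate the ray
        if transmittance < 1e-4:           # early ray termination (optional)
            break
    return color
```

A fully opaque sample hides everything behind it; semi-transparent samples blend, which is how the 2-D image conveys the 3-D structure along each ray path.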
In the medical imaging example discussed above, even though a doctor using CT or MRI equipment and conventional methods can arbitrarily generate 2-D image slices/cuts of, e.g., a heart by intercepting the image volume in any direction, no single image slice is able to visualize the whole surface of the heart. In contrast, a 2-D image generated through direct volume rendering of the CT image volume can easily reveal on a computer monitor the 3-D characteristics of the heart, which is very important in many types of cardiovascular disease diagnosis. Similarly, in the field of oil exploration, direct volume rendering of 3-D seismic data has proved to be a powerful tool that can help petroleum engineers determine more accurately the 3-D characteristics of geological structures embedded in a region that are potential oil reservoirs, and thereby increase oil production significantly.
One of the most common and basic structures used to control volume rendering is the transfer function. In the context of volume rendering, a transfer function defines the classification/translation of the original elements of the volumetric data (voxels) into their representation on the computer monitor screen; the most commonly used representation is a color (red, green, blue) and opacity classification. Hence each voxel has a color and an opacity value defined using a transfer function. Mathematically, the transfer function is, e.g., a simple ramp, a piecewise linear function, or a lookup table. Computer-enabled volume rendering as described here may use conventional volume ray tracing, volume ray casting, splatting, shear warping, or texture mapping. More generally, transfer functions in this context assign renderable (by volume rendering) optical properties to the numerical values (voxels) of the dataset. The opacity function determines the contribution of each voxel to the final (rendered) image.
There are two typical methods/orders in which to apply classification information (i.e., to apply transfer functions):
Classification-Interpolation (CI): First apply the classification information (e.g., red, green, blue, opacity) to the data grid (pixels/voxels), and then interpolate these four values to obtain a sample. This order is called Classification-Interpolation (CI). Since the interpolation is performed on already classified data, the sampling theorem ensures that two samples per voxel/cell are sufficient; therefore, in the case of CI, the rendering quality does not improve with increasingly higher sampling density along a ray.
Interpolation-Classification (IC): First interpolate the original data, and then apply the classification information to the result of the interpolation. This order is called Interpolation-Classification (IC). IC provides increasingly higher quality volume rendering for higher sampling density.
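The difference between the two orders can be seen in a one-dimensional sketch with two neighboring voxels and a hypothetical step-shaped opacity transfer function (the threshold value 100 and the sample position are arbitrary choices for the example):

```python
def tf(v):
    """Hypothetical step transfer function: fully opaque at or above 100."""
    return 1.0 if v >= 100 else 0.0

v0, v1 = 0.0, 200.0   # two neighboring voxel values
t = 0.25              # normalized sample position between them

# Classification-Interpolation: classify the voxels, then interpolate
# the classified (opacity) values.
ci = (1 - t) * tf(v0) + t * tf(v1)    # 0.75 * 0.0 + 0.25 * 1.0

# Interpolation-Classification: interpolate the raw data, then classify
# the interpolated value.
ic = tf((1 - t) * v0 + t * v1)        # tf(50.0)
```

CI yields a partially opaque sample (the opacities blend smoothly between the voxels), whereas IC classifies the interpolated raw value and yields zero opacity at this sample position; with a finer, denser sampling IC resolves the true location of the threshold crossing, which is why its quality improves with sampling density.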
One of the most common procedures in volume rendering visualization is to visualize the result of segmentation, wherein the same data values should be assigned to different transfer functions; for example, left and right kidney values may be linked/segmented to different transfer functions to make them appear different when displayed. Therefore, to visualize the result of segmentation, groups of data voxels should be linked/associated/segmented to different transfer functions.
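One way to sketch this association is with a per-voxel label volume that selects among several transfer functions; the kidney labels and the particular color assignments below are hypothetical and serve only to show that identical data values can render differently.

```python
import numpy as np

# Hypothetical segmented volume: identical voxel values, different labels.
data = np.array([120, 120, 120, 120])   # same scalar value everywhere
labels = np.array([0, 0, 1, 1])         # 0 = left kidney, 1 = right kidney

# One transfer function per segment label (scalar value -> RGBA).
def tf_left(v):
    return np.array([1.0, 0.0, 0.0, v / 255.0])   # reddish
def tf_right(v):
    return np.array([0.0, 0.0, 1.0, v / 255.0])   # bluish
tfs = [tf_left, tf_right]

# Classification selects the transfer function by each voxel's label.
classified = np.array([tfs[lbl](v) for v, lbl in zip(data, labels)])
```

Although every voxel carries the value 120, the two segments are classified to different colors, which is exactly what visualizing a segmentation result requires.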
There are generally two major issues in applying multiple transfer functions to volume rendering:
A min/max octree, traditionally used to speed up volume rendering, generally needs a mechanism to deal with the different classifications represented in each sub-volume, since multiple transfer functions offer multiple interpretations of a sub-volume's contribution (the same min/max range can be translated differently by different transfer functions).
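The issue can be sketched as follows: a sub-volume (octree node) may be skipped as empty only if every transfer function present in it maps the node's entire min/max value range to zero opacity, so a single-classification transparency test is no longer sufficient. The opacity tables below are hypothetical examples.

```python
import numpy as np

# Hypothetical per-label opacity lookup tables (8-bit value -> opacity).
opacity_luts = {
    0: np.where(np.arange(256) >= 200, 1.0, 0.0),  # label 0: opaque above 200
    1: np.where(np.arange(256) >= 50, 1.0, 0.0),   # label 1: opaque above 50
}

def node_is_transparent(vmin, vmax, labels_in_node):
    """A min/max octree node may be skipped only if EVERY transfer
    function applied inside it maps the whole [vmin, vmax] range to
    zero opacity."""
    return all(opacity_luts[lbl][vmin:vmax + 1].max() == 0.0
               for lbl in labels_in_node)
```

A node with values in [0, 100] is skippable under label 0 alone, but not once label 1 also appears in the node, because label 1's transfer function makes part of that same range visible.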
The CI order naturally accommodates multiple classifications, since classification is applied before interpolation, and the CI order is therefore traditionally used to visualize multiple classifications; however, CI generally provides lower rendering quality than IC.
Therefore, it is desired to address these two issues in applying multiple transfer functions to volume rendering.