As is known in the art, volume rendering generates two-dimensional images from three-dimensional data volumes. Magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound scanning use volume rendering for three-dimensional imaging. Data representing a volume, such as data representing a plurality of two-dimensional planes spaced within the volume or data representing a plurality of different lines spaced within the volume, is obtained. The 3D representation is rendered from this data. Direct volume rendering, as distinguished from surface-based rendering methods, gives insight into the interior of bodies. It has therefore grown important for the visualization of scientific simulations, and especially in the field of medicine.
The use of the GPU for direct volume rendering has gained tremendous popularity. The GPU is built to accelerate the rendering of three-dimensional scenes, and its design is tailored to this task. The scenes that are rendered are described by geometry data—vertices that are connected to build lines, triangles and more complex shapes—and texture data—images that are mapped onto the geometry. The graphics pipeline consists of three main stages in which the incoming data is processed: the vertex stage, the geometry stage and the fragment stage. GPUs support programmability in all three stages. In the vertex stage, the geometry data (such as vertex positions and texture coordinates) is transformed according to the currently active vertex shader. In the geometry stage, geometric primitives are generated from the provided vertices. In the fragment stage, fragment shaders are executed for each fragment, with the interpolated vertex attributes as input. This is the stage that is most heavily exploited for general-purpose computations on the GPU (GPGPU).
Numerous algorithms for visualizing volumes have been published (see Engel K., Hadwiger M., Kniss J. M., Lefohn A., Rezk-Salama C., Weiskopf D.: Real-Time Volume Graphics. A K Peters, 2006). Ray-casting is an image-order approach that traces viewing rays through the volume and samples it at discrete positions. Texture slicing is an object-order approach: the volume is sampled by slices, and the result is then projected and composited. Splatting is another object-order technique that works by projecting voxels onto the image plane.
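As an illustrative sketch only (not part of any cited reference, and with hypothetical names), the image-order ray-casting approach described above can be summarized as tracing one ray per pixel, sampling the volume at discrete positions, and compositing the samples front-to-back:

```python
# Minimal image-order ray caster on a toy volume (hypothetical names;
# orthographic rays along +Z, front-to-back compositing).

def make_volume(nx, ny, nz):
    # A simple synthetic volume: a bright box in the center.
    vol = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx // 4, 3 * nx // 4):
        for y in range(ny // 4, 3 * ny // 4):
            for z in range(nz // 4, 3 * nz // 4):
                vol[x][y][z] = 1.0
    return vol

def transfer(value):
    # Transfer function: map a scalar sample to (intensity, opacity).
    return value, 0.5 * value

def cast_ray(vol, x, y, step=1):
    nz = len(vol[0][0])
    color, alpha = 0.0, 0.0
    for z in range(0, nz, step):          # discrete sample positions
        c, a = transfer(vol[x][y][z])
        color += (1.0 - alpha) * a * c    # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                  # early ray termination
            break
    return color

def render(vol):
    nx, ny = len(vol), len(vol[0])
    return [[cast_ray(vol, x, y) for y in range(ny)] for x in range(nx)]

image = render(make_volume(8, 8, 8))
```

A ray through the central box accumulates opacity, while a ray through empty space stays black; the early termination test is the image-order analogue of the acceleration techniques discussed later.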
One of the challenges is the ever-increasing amount of data to be visualized, due to advances in image acquisition technology and simulation methodologies. Since the amount of data easily surpasses the video memory available on current GPUs, brick-based volume rendering methods are combined with efficient memory management that swaps the needed data from main memory (or even from hard disk) into video memory. Bricking is an important technique in texture-based volume rendering. In principle, bricking partitions a volumetric dataset into multiple sub-volumes, each called a brick.
Brick-based volume rendering, or bricking, serves two main purposes: 1) bricks containing only invisible voxels are skipped to gain acceleration; and 2) volumes larger than graphics memory can be rendered by downloading a subset of all the bricks to graphics memory for each rendering pass. With multiple rendering passes, all the visible voxels are processed. Within each brick, voxels can be further subdivided into blocks, and invisible blocks are skipped. Therefore, the minimal granularity of rendering is the block, while the minimal granularity of texture downloading is the brick. Obviously, a block has to be completely enclosed in the corresponding brick. See Wei Li, “Invisible Space Skipping with Adaptive Granularity for Texture-based Volume Rendering”, Published U.S. Patent Application Pub. No. 2006/0164410, published Jul. 27, 2006, and Published U.S. Patent Application Pub. No. 2007/0247459, published Oct. 25, 2007, both assigned to the same assignee as the present invention, the entire subject matter of both being incorporated herein by reference, for more details about bricking.
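The two purposes above can be sketched as follows (an illustrative sketch with hypothetical helper names, not the cited method itself): the index space of a volume is partitioned into axis-aligned bricks, and bricks whose min-max value range maps to complete transparency are skipped:

```python
# Sketch of partitioning a volume's index space into bricks
# (hypothetical names; a brick is an axis-aligned index range).

def make_bricks(dims, brick_size):
    """Partition a volume of the given dimensions into bricks."""
    nx, ny, nz = dims
    bricks = []
    for x0 in range(0, nx, brick_size):
        for y0 in range(0, ny, brick_size):
            for z0 in range(0, nz, brick_size):
                bricks.append(((x0, y0, z0),
                               (min(x0 + brick_size, nx),
                                min(y0 + brick_size, ny),
                                min(z0 + brick_size, nz))))
    return bricks

def visible_bricks(bricks, min_max, is_visible):
    # Skip bricks whose entire value range maps to transparent;
    # min_max[i] = (min_value, max_value) of the voxels in brick i.
    return [b for b, mm in zip(bricks, min_max) if is_visible(*mm)]

# A 512x512x300 dataset bricked at size 128 (edge bricks are smaller).
bricks = make_bricks((512, 512, 300), 128)
```

Per rendering pass, only a subset of the visible bricks needs to reside in graphics memory; multiple passes cover the full set.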
As is also known in the art, it is frequently desired to combine or fuse different images. For example, modern medical technology provides different modalities for acquiring 3D data, such as computed tomography (“CT”), magnetic resonance imaging (“MRI”), positron emission tomography (“PET”), and ultrasound. The information obtained from different modalities is usually complementary; for example, CT provides structural information while PET provides functional information. In another example, it may be desired to fuse one CT scan of a patient taken at one time with another CT scan of the same patient taken at a later time, to study the progression of a tumor being observed within the patient. Thus, it is generally desirable to fuse multiple volumetric datasets.
Most existing fusion renderers require that all volumes be aligned and have the same resolution, thereby necessitating that all volumes be re-sampled except the one that is treated as the reference volume. The reference volume is generally the volume with the finest resolution, chosen to avoid losing information; the other volumes are re-sampled according to the grid of the reference volume. The reference volume may need to be expanded to fill the bounding box enclosing all volumes. The aggregate bounding box of the ensemble of volumes can be significantly larger than the individual bounding boxes when the orientation of a volume happens to lie near the diagonal of another volume. The number of voxels after re-sampling is proportional to the volume of the aggregate bounding box. Therefore, re-sampling can significantly increase the processing time (both initially and for each rendering) as well as the amount of memory required.
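The growth of the aggregate bounding box can be made concrete with a small worked sketch (illustrative only, with hypothetical names): rotating a cubic volume 45 degrees about one axis already doubles the axis-aligned bounding box volume onto which it would have to be re-sampled:

```python
# Sketch: the axis-aligned bounding box of a rotated volume can be
# much larger than the volume itself, so re-sampling a floating
# volume onto the reference grid inflates the voxel count.

import math

def rotate_z(p, angle):
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def corners(dims):
    nx, ny, nz = dims
    return [(x, y, z) for x in (0, nx) for y in (0, ny) for z in (0, nz)]

def aabb(points):
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

dims = (256, 256, 256)
lo, hi = aabb([rotate_z(p, math.pi / 4) for p in corners(dims)])
inflation = ((hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])) / 256 ** 3
# inflation is 2.0: twice as many voxels after re-sampling.
```

The re-sampled voxel count, and hence memory and processing time, scales with this inflation factor.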
The volumes usually need to be registered because different scanners can have different coordinate systems (in terms of origins and orientations). During registration, all volumes except the reference volume are referred to as floating volumes. Various transformations, such as rotation, translation, scaling, and shearing, are applied to the floating volumes so that their features match those in the reference volume. Furthermore, re-sampling must be performed again after each such transformation. Registration typically requires user interaction, with visual feedback, that is repeatedly applied to refine the registration. A resampling-based fusion renderer cannot respond quickly enough for such interactive requirements.
A fusion renderer handles multiple volumes that usually overlap in 3D space, as discussed in co-pending U.S. patent application Ser. No. 11/235,410, filed Sep. 26, 2005, assigned to the same assignee as the present invention, the subject matter thereof being incorporated herein by reference. The basic idea described in U.S. patent application Ser. No. 11/235,410 is to render a whole slice through all blocks, hence performing the rendering in slice-by-slice order instead of block-by-block. Briefly, the method includes the steps of: (a) building a hierarchical structure for each of a plurality of volumes; (b) finding all blocks in each of the hierarchical structures that intersect a slicing plane; (c) dividing each of the plurality of volumes into stacks of parallel slices and sorting the parallel slices by visibility order; (d) choosing a next slice in the sorted parallel slices, the next slice belonging to a current volume; (e) changing rendering parameters if the current volume is different from a previous volume in a previous iteration of step (d); (f) rendering, based on the rendering parameters, the next slice by intersecting the slicing plane with the blocks corresponding to the current volume; and (g) repeating steps (d)-(f) until all of the sorted parallel slices are rendered. It is desirable to keep each volume independent, rather than resampling them to the same resolution. Each volume maintains its own space-skipping structure. This has advantages in rendering speed, memory consumption, and flexibility.
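The ordering logic of steps (c)-(g) can be sketched as follows (an illustrative sketch with hypothetical names, not the claimed method): slices from all volumes are merged into one visibility-sorted sequence, and per-volume rendering parameters are switched only when consecutive slices come from different volumes:

```python
# Sketch of slice-by-slice fusion ordering (hypothetical names).

def fuse_slices(volumes):
    """volumes: {name: list of slice depths}. Returns the render order."""
    all_slices = [(depth, name)
                  for name, depths in volumes.items()
                  for depth in depths]
    all_slices.sort()                      # (c) sort by visibility order
    order, current = [], None
    for depth, name in all_slices:         # (d) choose next slice
        if name != current:                # (e) volume changed:
            order.append(("set_params", name))  # switch parameters
            current = name
        order.append(("render_slice", name, depth))  # (f) render it
    return order

# Two interleaved volumes with different slice spacings.
ops = fuse_slices({"CT": [0.0, 1.0, 2.0], "PET": [0.5, 1.5]})
```

Because the volumes interleave, parameter switches occur between nearly every pair of slices; keeping each volume's own space-skipping structure avoids any resampling to a common grid.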
When a dataset is partitioned into bricks, not all voxels are accessible during a rendering pass. Therefore, an algorithm combining bricking and fusion has to guarantee that those inaccessible voxels are not rendered in the current pass, and that every visible voxel is rendered exactly once. This is a challenging task, especially when invisible bricks and blocks are skipped. For simplicity, the previous co-pending U.S. patent application Ser. No. 11/235,410 only handles a simplified bricking scheme: a volume is bricked only along the Z axis. In other words, each brick has at most two neighbors.
Bricking only in Z results in slab-shaped bricks, which is unfortunately less efficient in performance. Moreover, due to graphics hardware limitations, datasets whose X and Y dimensions exceed a certain limit (currently 512) cannot be rendered at all. Graphics memory is also a precious resource in a system.
Thus, the most intuitive approach to rendering volumes that do not fit into video memory is bricking. As noted above, in this method, the 3D volume is subdivided into rectangular regions called bricks. These bricks are then rendered in a divide-and-conquer approach, using slicing or ray casting. The size of the bricks is chosen in order to achieve the best performance. Loading only those bricks that are actually needed with a given transfer function reduces the amount of memory transfer. In that respect small bricks are better than large bricks. On the other hand, larger bricks reduce the amount of overhead for managing these bricks and transferring the bricks into GPU memory. Another disadvantage of small bricks appears when using duplicated voxels at brick boundaries to avoid sampling artifacts. The smaller the brick is, the larger the duplicated space is in proportion to the brick size.
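The brick-size trade-off described above can be quantified with a simple model (illustrative only, with hypothetical names): assuming a one-voxel overlap at brick boundaries to avoid sampling artifacts, a brick of edge length b carries only (b-1)³ unique voxels, so smaller bricks waste a larger fraction of memory on duplicates:

```python
# Sketch quantifying boundary-voxel duplication (hypothetical model:
# one duplicated voxel layer on shared brick faces).

def duplication_overhead(b):
    """Fraction of a brick's voxels that are duplicated copies."""
    unique = (b - 1) ** 3
    return 1.0 - unique / b ** 3

small = duplication_overhead(16)   # small bricks: ~17.6% duplicated
large = duplication_overhead(128)  # large bricks: ~2.3% duplicated
```

This is one half of the trade-off; the other half (transfer granularity and management overhead) pushes in the opposite direction, which motivates the two-level hierarchy discussed next.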
Therefore a two-level hierarchy is advantageous [see Engel K., Hadwiger M., Kniss J. M., Lefohn A., Rezk-Salama C., Weiskopf D.: Real-Time Volume Graphics. A K Peters, 2006]. A coarse subdivision of the volume into bricks allows out-of-core rendering while not wasting too much video memory, and a finer-grained subdivision into blocks allows empty-space skipping at a smaller level. For each of these subdivisions, the minimum and maximum values of the voxels contained are saved (min-max information for empty-space skipping). Regions whose interval of values is mapped to completely transparent values (with respect to a given transfer function) are not rendered at all. Empty texture bricks are not even loaded, while empty blocks are skipped. A common method for partitioning a volume into blocks is Binary Space Partitioning (BSP), which works by recursively subdividing the volume into two halves to construct a hierarchical tree data structure, as shown in FIG. 8. Some nodes are not subdivided any further if they are already too small; this happens because the volume does not necessarily have power-of-two dimensions. Nodes that are not subdivided any further are called leaves, and the nodes created by subdividing a node are called children of that subdivided node. This is clarified in FIG. 8, where node 0 is the parent of nodes 1 and 2; nodes 1 and 2 are children of node 0; and node 3 has no children, with node 1 as its parent. Since node 3 does not have any children, it is called a leaf node or leaf. Leaf nodes are indicated by circles.
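The BSP subdivision into blocks can be sketched as follows (illustrative only, with hypothetical names; min-max fields are left unfilled since they depend on the actual voxel data): each node splits its longest axis in half until a minimum block size is reached:

```python
# Sketch of a BSP-style subdivision into blocks with per-node
# min-max slots for empty-space skipping (hypothetical names).

class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # index range of this block
        self.children = []
        self.vmin = self.vmax = None       # min-max of contained voxels

def build_bsp(lo, hi, min_size=32):
    node = Node(lo, hi)
    extents = [h - l for l, h in zip(lo, hi)]
    axis = extents.index(max(extents))     # split the longest axis
    if extents[axis] <= min_size:          # too small: make it a leaf
        return node
    mid = (lo[axis] + hi[axis]) // 2       # subdivide into two halves
    hi_a = list(hi); hi_a[axis] = mid
    lo_b = list(lo); lo_b[axis] = mid
    node.children = [build_bsp(lo, tuple(hi_a), min_size),
                     build_bsp(tuple(lo_b), hi, min_size)]
    return node

def leaves(node):
    if not node.children:
        return [node]
    return [l for c in node.children for l in leaves(c)]

root = build_bsp((0, 0, 0), (128, 64, 64))
```

Traversing the children in view-dependent order yields the visibility sorting mentioned next; a node whose [vmin, vmax] interval maps to full transparency can be culled together with its entire subtree.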
As an added benefit, the BSP-tree also allows the visibility sorting that is necessary for correctly compositing the visualization of the blocks.
On current graphics hardware, an efficient method to perform a deformation based on a vector field is texture-space deformation using fragment shaders. Whenever the volume texture is sampled, an associated deformation texture is sampled as well. The deformation texture does not have to be the same size as the volume texture. It stores a three-dimensional offset vector in its RGB channels. The original texture coordinate plus the offset vector determines a new sampling position, and the volume texture is then sampled at this point.
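A minimal CPU-side sketch of this lookup (illustrative only, with hypothetical names; a fragment shader would express the same two texture fetches): an offset vector is read from a possibly lower-resolution deformation texture, added to the texture coordinate, and the volume is sampled at the displaced position:

```python
# Sketch of texture-space deformation (hypothetical names).
# Coordinates are normalized to [0, 1); nearest-neighbor sampling
# keeps the sketch short where real hardware would interpolate.

def sample_nearest(tex, coord):
    dims = (len(tex), len(tex[0]), len(tex[0][0]))
    i, j, k = (min(int(c * d), d - 1) for c, d in zip(coord, dims))
    return tex[i][j][k]

def deformed_sample(volume, deformation, coord):
    offset = sample_nearest(deformation, coord)   # RGB -> offset vector
    warped = tuple(max(0.0, min(c + o, 0.999999)) # displaced coordinate
                   for c, o in zip(coord, offset))
    return sample_nearest(volume, warped)         # sample volume there

# 2x2x2 volume; the deformation shifts every sample by +0.5 in x.
volume = [[[0, 0], [0, 0]], [[1, 1], [1, 1]]]
deformation = [[[(0.5, 0.0, 0.0)]]]               # coarser than the volume
value = deformed_sample(volume, deformation, (0.25, 0.25, 0.25))
```

The shifted sample lands in a different region of the volume than the original coordinate, which is exactly what makes bricked rendering difficult: the displaced position may fall outside the currently active brick.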
At the same time, deforming volumes is important for applications like registration or modeling. The deformation affects the spatial structure of the data, so that samples from different bricks, not only from the currently active brick, must be accessed. This leads to problems when a sample that is moved by the deformation field must be fetched from a different brick than the active one.
Combining the bricked large volume with deformation leads to several problems:
1. Deformation fields possibly destroy or at least reduce the locality of data and do not adhere to brick boundaries. Using brick-based volume rendering algorithms, the necessary bricks for the rendering are loaded into GPU memory as needed. This approach does not work with deformable volume rendering, since the deformation does not stop at brick boundaries, so for the rendering of one deformed brick, neighboring bricks need to be accessed as well.
2. Preprocessing the volume does not work when the deformation field is dynamic.
3. The deformation destroys the information that is used for empty-space skipping, like the maximum and minimum values within a brick.
Papers like Fang S., Srinivasan R., Huang S., Raghavan R.: Deformable Volume Rendering by 3D Texture Mapping and Octree Encoding, IEEE Visualization '96, pages 73-80, 1996 only address the deformation part, but do not deal with the challenges of deforming large volumes that need to be displayed interactively. In Schulze F., Bühler K., Hadwiger M.: Interactive Deformation and Visualization of Large Volume Datasets, GRAPP 2007, pages 39-46, 2007, the researchers focus on a physically-based deformation, but do not deal with volumes that do not fit into graphics memory.
Based on the existing prior art, it is evident that there is a need for a new inventive method that solves the issues mentioned above and allows efficient, interactive, and dynamic deformation even for large volumes.