The present invention relates generally to computer processing of multi-dimensional data, e.g. in 3D. More particularly the invention relates to a system according to the preamble of claim 1 and a method according to the preamble of claim 11. The invention also relates to a computer program according to claim 27 and a computer readable medium according to claim 28.
The size and complexity of the data that today's computers must handle are often challenging in many ways. For example, the processing demands of interactively viewing volumetric data from simulations and medical imaging can rapidly become immense. In practice, however, this problem may be reduced substantially due to various inherent data properties, as well as user-set parameters. Namely, most voxels in a volume to be viewed may either be rendered completely transparent, or be obscured by voxels representing other image parts. In computer imaging, a so-called transfer function (TF) is normally used to describe which image parts shall be visible, and to what extent, in a particular visualization or view of the data.
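The role of the transfer function can be sketched as a simple lookup table. The following is a minimal illustration, assuming scalar voxel values in [0, 255]; the value range made visible and the chosen color are purely illustrative.

```python
import numpy as np

def make_transfer_function():
    # A TF maps each scalar voxel value to an RGBA tuple; alpha = 0
    # renders the voxel completely transparent.
    tf = np.zeros((256, 4), dtype=np.float32)   # start fully transparent
    # Illustrative choice: make values 100-180 visible as semi-opaque grey.
    tf[100:181] = (0.7, 0.7, 0.7, 0.4)
    return tf

volume = np.random.randint(0, 256, size=(64, 64, 64))
tf = make_transfer_function()
rgba = tf[volume]                    # per-voxel color and opacity
visible = rgba[..., 3] > 0.0
print(f"visible voxels: {visible.mean():.1%}")
```

As the sketch suggests, a large share of the voxels typically receives zero opacity, which is precisely the property that later sections exploit.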
FIG. 1 shows a general block diagram of a prior-art system for processing three-dimensional image data. The system includes a primary unit 110, a long-term storage means 120, a temporary storage means 130 and a presentation unit 140, such as a graphics processor. The primary unit 110 is adapted to produce source data, for instance as the result of a simulation or measurement process. The primary unit 110 then compresses the data, either losslessly or lossily, and stores the source data in the compressed format in the long-term storage means 120. Thereby, the storage resources are economized. However, in connection with further processing of the data, for example in visualization, the source data must be decompressed and transferred to the temporary storage means 130. At this stage, the amount of data is often comparable to the original amount of source data that was produced by the primary unit 110. Thus, the presentation unit 140 must typically handle a very large amount of data, and much of the efficiency gained in the compression is lost in the following processing pipeline.
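The shortcoming of this pipeline can be illustrated with a small sketch, using `zlib` as a stand-in for the compression step; the data sizes are illustrative.

```python
import zlib
import numpy as np

# Sketch of the FIG. 1 pipeline: source data is compressed for long-term
# storage, but must be fully decompressed before the presentation unit
# can use it, so the storage saving does not reach the rendering stage.
rng = np.random.default_rng(0)
source = rng.integers(0, 32, size=(32, 32, 32), dtype=np.uint8).tobytes()

stored = zlib.compress(source)       # long-term storage (compressed)
working = zlib.decompress(stored)    # temporary storage (full size again)
print(len(source), len(stored), len(working))
```

The working copy handed to the presentation unit is exactly as large as the original source data, regardless of how well the stored copy compressed.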
The article Ljung, P. et al., “Transfer Function Based Adaptive Decompression for Volume Rendering of Large Medical Data Sets”, Proceedings IEEE Volume Visualization and Graphics Symposium, pp 25-32, 2004, describes how medical knowledge embedded in the TF can be exploited to reduce the required bandwidth of a direct volume rendering pipeline for producing medical images, for instance based on computed tomography data. Thus, parts of a data volume can be represented at low resolution while retaining an overall high visual quality. A level-of-detail (LOD) scheme here defines which parts of the data set shall be presented at a specific resolution in a particular visualization of the data. Based on the LOD scheme, a multi-resolution data set represented by means of compressed wavelet transformed blocks can be adaptively decompressed while maintaining high rendering quality and significantly reducing the required amount of data in the rendering pipeline.
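The principle of a TF-based LOD selection can be sketched as follows. The sketch assumes the volume is partitioned into blocks and that the minimum and maximum voxel value of each block are known; the significance measure and all names are illustrative, not the scheme of the cited article.

```python
import numpy as np

def select_lod(block_ranges, tf_alpha, max_level=3):
    # Assign a resolution level to each block from its (min, max) value
    # range: blocks whose whole range is transparent under the TF can be
    # skipped, while more opaque content gets a higher resolution level.
    levels = []
    for lo, hi in block_ranges:
        alpha = tf_alpha[lo:hi + 1]
        if alpha.max() == 0.0:
            levels.append(0)             # fully transparent: skip block
        else:
            levels.append(1 + int(alpha.mean() * (max_level - 1)))
    return levels

tf_alpha = np.zeros(256, dtype=np.float32)
tf_alpha[100:181] = 0.4                  # only this value range is visible
blocks = [(0, 50), (90, 120), (150, 200)]
print(select_lod(blocks, tf_alpha))      # the first block is skipped
```

Blocks whose value ranges never intersect the visible part of the TF never need to be decompressed at all, which is where most of the bandwidth reduction comes from.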
Although the above-described approach using multi-resolution data sets and LOD schemes is very resource efficient, certain practical problems remain to be solved. For example, at the boundaries between data blocks representing different resolution levels, artifacts may occur that deteriorate the visual impression. Typically, the resulting images are perceived as blocky. Of course, similar boundary effects (i.e. undesired discontinuities in the target data) may arise also in cases where the data set represents other information than 3D images, such as meteorological data, or map data.
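The origin of such artifacts can be illustrated in one dimension. In this sketch, a smooth signal is split into two blocks, the second of which is stored at half resolution and reconstructed by nearest-neighbour upsampling; all values are illustrative.

```python
import numpy as np

# A smooth ramp split into two blocks of 8 samples each.
signal = np.linspace(0.0, 1.0, 16)
left = signal[:8]                        # full-resolution block
right = np.repeat(signal[8::2], 2)       # half-resolution block, upsampled
recon = np.concatenate([left, right])

# The low-resolution block produces steps twice as large as those of the
# full-resolution block, so the reconstruction is visibly non-uniform
# across the boundary -- the 3D analogue is perceived as blockiness.
jumps = np.abs(np.diff(recon))
print(float(jumps.max()), float(np.diff(signal).max()))
```

The discontinuity pattern changes exactly where the resolution level changes, which is why the artifacts concentrate at block boundaries.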