The present embodiments relate to a method and to a device for influencing a depiction of a three-dimensional object.
The present embodiments lie within the field of volume rendering, the depiction or visualization of three-dimensional bodies or objects. The modeling, reconstruction or visualization of three-dimensional objects has a wide field of application in the area of medicine (e.g., CT, PET, MR, ultrasound), physics (e.g., electron structure of large molecules) and geophysics (e.g., composition and position of the earth's layers). The object to be investigated may be irradiated (e.g., using electromagnetic waves or sound waves) in order to investigate the composition of the object. The scattered radiation is detected, and properties of the body are determined from the detected values. The result conventionally includes a physical variable (e.g., proportion of tissue components, elasticity, speed), the value of which is determined for the body. A virtual grid may be used in this case, the value of the variable being determined at the grid points thereof. The grid points or the values of the variable at these locations may be voxels. The voxels are often in the form of gray scale values.
A three-dimensional depiction of an investigated object or body is produced from the voxels using volume rendering on a two-dimensional display area (e.g., a screen). Pixels, of which the image of the two-dimensional image display is composed, are produced from the voxels (e.g., with the intermediate act of obtaining object points from the voxels by interpolation). Alpha compositing may be carried out in order to visualize three dimensions on a two-dimensional display. With alpha compositing, colors and transparency values (e.g., values for the non-transparency or opacity (the covering power of various layers of a body)) are allocated to voxels or volume points formed from voxels. More specifically, three color components in the form of a three-tuple, which codes the fractions of the colors red, green and blue (e.g., the RGB value), and an alpha value, which parameterizes the non-transparency, are allocated to an object point. Together, the three-tuple and the alpha value form a color value RGBA that is combined or mixed with the color values of other object points to form a color value for the pixel (e.g., conventionally using alpha blending for the visualization of partially transparent objects).
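The alpha blending described above can be sketched as a front-to-back "over" composition of RGBA samples. The following is a minimal illustration only; the function name and the early-termination threshold are assumptions, not part of any specific method.

```python
# Sketch of front-to-back alpha compositing ("over" operator) for one
# pixel. Names and the 0.999 opacity cutoff are illustrative choices.

def composite_front_to_back(samples):
    """Blend RGBA samples ordered front to back into one pixel color.

    Each sample is (r, g, b, a) with color channels and an opacity
    (non-transparency) value a in [0, 1].
    """
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    for r, g, b, a in samples:
        weight = (1.0 - out_a) * a  # remaining transparency times opacity
        out_rgb[0] += weight * r
        out_rgb[1] += weight * g
        out_rgb[2] += weight * b
        out_a += weight
        if out_a >= 0.999:  # pixel is effectively opaque; stop early
            break
    return (*out_rgb, out_a)

# A semi-transparent red layer in front of an opaque blue layer:
pixel = composite_front_to_back([(1, 0, 0, 0.5), (0, 0, 1, 1.0)])
```

In this example the front layer contributes half its color, and the opaque back layer fills the remaining transparency, so the pixel mixes red and blue equally.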
An illumination model may be used to allocate a suitable color value. The illumination model takes account of light effects (e.g., reflections of light on the outer surface or surfaces of inner layers of the object being investigated) in the case of modeled or simulated irradiation of the object for the purpose of visualization.
The literature describes a range of illumination models. The Phong or Blinn-Phong model, for example, may be used.
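The Phong model combines ambient, diffuse and specular terms at a surface point. The following sketch illustrates this for a single light source; the coefficient values and all names are illustrative assumptions.

```python
# Minimal sketch of the Phong illumination model for one surface point
# and one light source. Coefficients and names are illustrative.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def phong_intensity(normal, light_dir, view_dir,
                    k_ambient=0.1, k_diffuse=0.7, k_specular=0.2,
                    shininess=16):
    """Return scalar intensity: ambient + diffuse + specular terms."""
    n = normalize(normal)
    l = normalize(light_dir)   # from surface point toward the light
    v = normalize(view_dir)    # from surface point toward the eye
    diffuse = max(dot(n, l), 0.0)
    # Reflection of the light direction about the surface normal:
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return k_ambient + k_diffuse * diffuse + k_specular * specular
```

With the normal, light and viewing directions all aligned, the three terms sum to the full coefficient total; grazing light reduces the diffuse term toward zero.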
A frequently used method for volume rendering is ray casting (i.e., the simulation of incident light radiation to depict or visualize the body).
With ray casting, imaginary rays that emanate from the eye of an imaginary observer are sent through the body or object being investigated. Along the rays, RGBA values are determined from the voxels for scanning spots and are combined using alpha compositing or alpha blending to form pixels for a two-dimensional image. Illumination effects are conventionally taken into account using the illumination models discussed above within the framework of a method called “shading.”
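The ray casting procedure described above can be sketched as a loop that steps along one ray, samples an RGBA value at each scanning spot, and blends the samples front to back. All names here, including the `sample_rgba` callback that stands in for interpolation and classification, are assumptions for this sketch.

```python
# Illustrative ray-marching loop for one pixel. sample_rgba(pos) stands
# in for voxel interpolation plus the transfer function that maps a
# sample position to an (r, g, b, a) value; names are assumptions.

def cast_ray(origin, direction, num_steps, step_size, sample_rgba):
    """March one ray and return the composited RGBA pixel value."""
    rgb = [0.0, 0.0, 0.0]
    alpha = 0.0
    for i in range(num_steps):
        # Position of the i-th scanning spot along the ray:
        pos = tuple(o + i * step_size * d
                    for o, d in zip(origin, direction))
        r, g, b, a = sample_rgba(pos)
        w = (1.0 - alpha) * a           # front-to-back blending weight
        rgb = [c + w * s for c, s in zip(rgb, (r, g, b))]
        alpha += w
        if alpha >= 0.999:              # early ray termination
            break
    return (*rgb, alpha)
```

In a full renderer, one such ray is cast per pixel, and shading would be applied inside `sample_rgba` using an illumination model such as Phong.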
The depiction of the object may be appropriately adjusted in order to better be able to study properties of an object depicted using volume rendering. The depiction of the object displayed on a screen may be changed or influenced, for example, by applying color effects or by removing or enlarging parts of the object (e.g., volume editing and segmentation). Volume editing may include interventions such as clipping, cropping and punching. Segmentation allows object structures, such as anatomical structures of a depicted body part, to be classified. During the course of segmentation, object components, for example, are colored or removed. Direct volume editing may be the interactive editing or influencing of the object depiction using virtual tools such as brushes, chisels, drills or knives. For example, the user may interactively change the image of the object displayed on a screen by applying color effects or cutting away object parts using a mouse or another haptic input device or an input device functioning in some other way.
When the depicted object is processed in such a way, it is often not enough to change the calculated pixels of the object image. Instead, the pixels are recalculated. In other words, with many manipulations of this kind (e.g., color effects, clippings), volume rendering is carried out again with every change. The manipulation is then carried out on the volume data used for volume rendering. A method for this has been proposed by Bürger, K. et al., “Direct Volume Editing,” IEEE Transactions on Visualization and Computer Graphics, Vol. 14, No. 6 (2008): pp. 1388-95. This method allows the depiction to be manipulated by direct editing of a replicated volume.
There is a need for flexible, straightforward methods for manipulating the depiction of objects using volume rendering, where, primarily, memory, computing and bandwidth requirements are reduced in comparison with known methods.