Electronic image data in two (2D) and more spatial dimensions are widely used for a great variety of applications. Image data in three spatial dimensions (3D) are used, for example, for three-dimensional simulations of processes, for the design and construction of spatial objects, and for measuring and optically reproducing such objects.
A particular application is constituted by methods of medical imaging technology, where patient bodies are examined in three dimensions, for example using radiological imaging methods, and the three-dimensional examination data are acquired for further processing steps. In diagnostics it is possible, on the one hand, to identify examined body volumes of particular interest, so-called hot spots. In nuclear medicine, image regions of increased intensity that indicate the presence of a tumor in the region (increased tissue activity) are denoted as hot spots.
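The hot-spot notion described above can be illustrated with a minimal sketch: flagging voxels whose intensity clearly exceeds the background. The statistical threshold rule, the function name, and the NumPy array representation are illustrative assumptions made here, not part of the source.

```python
import numpy as np

def find_hot_spots(volume, factor=3.0):
    """Flag voxels whose intensity exceeds the volume mean by a
    multiple of the standard deviation (a simple, assumed rule)."""
    threshold = volume.mean() + factor * volume.std()
    # Coordinates of all flagged voxels, one (z, y, x) triple per row.
    return np.argwhere(volume > threshold)

# Synthetic functional volume: low background, one bright region.
rng = np.random.default_rng(0)
vol = rng.normal(10.0, 1.0, size=(32, 32, 32))
vol[12:15, 20:23, 5:8] += 50.0  # simulated increased tissue activity

coords = find_hot_spots(vol)
print(coords.shape[0], "hot-spot voxels found")
```

In practice, clinical hot-spot detection uses far more elaborate criteria; this sketch only captures the idea of isolating regions of increased intensity.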
On the other hand, three-dimensional image data of the same body from different imaging methods can be brought together in a common display, a process termed fusion, in order to obtain a more informative image data record. Data from hot spots can play a particular role during fusion, since they permit the image data of precisely these body volumes from one imaging method to be viewed in the context of the image data of another imaging method. Such a fused image data record includes the hot spots as a partial image data record that can be specially characterized.
One example is the fusion of image data from positron emission tomography (PET) with image data from computed tomography (CT). The PET data constitute a diagnostic data record that includes information relating to specific metabolic functions of the patient body, and are therefore also denoted as functional image data or a functional data record. PET data essentially image soft tissue. By contrast, the CT data also image anatomic features of the patient body, such as bone structure, and therefore enable a viewer to orient himself substantially better with the aid of the patient's anatomy. Consequently, fusing the functional PET data with the CT data substantially facilitates the anatomic assignment of hot spots identified with the aid of PET.
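One common way to realize such a fused display, sketched below under the assumption that the two data records are already spatially registered and sampled on the same grid, is an alpha blend of the functional image over the anatomic image. The function name and the normalization step are our own illustrative choices, not taken from the source.

```python
import numpy as np

def fuse_slices(ct_slice, pet_slice, alpha=0.4):
    """Alpha-blend a functional (PET) slice over an anatomic (CT) slice.
    Assumes both inputs are registered and of equal shape; each is
    rescaled to [0, 1] before blending."""
    def normalize(img):
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)
    return (1.0 - alpha) * normalize(ct_slice) + alpha * normalize(pet_slice)
```

A call such as `fuse_slices(ct, pet, alpha=0.4)` then yields a slice in which the anatomy remains visible while the functional intensities, and hence the hot spots, are superimposed on it.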
A particular problem with three-dimensional image data in all applications resides in the restricted possibilities for optical display. It is customary to use two-dimensional display units, as a rule computer screens, that offer only restricted possibilities of visualization in three dimensions. Known examples are perspective displays, tomograms through planes of the object to be displayed, or rotating displays of the object, which is visualized either partially transparently or in a completely compact fashion.
A range of techniques that can be used in the way described are available for visualizing three-dimensional objects, and these are denoted as volume rendering techniques (VRT). It is possible, inter alia, to use a maximum intensity projection (MIP), which defines each pixel of the two-dimensional projection as the brightest voxel along the corresponding line of sight running from the (virtual) viewer through the three-dimensional object. Alternatively, it is possible to undertake multiplanar reformatting (MPR), in the case of which different two-dimensional projections of the object are displayed, for example mutually perpendicular projections.
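For axis-aligned lines of sight, the two techniques just named reduce to simple array operations, as the following sketch shows; the function names and the restriction to coordinate axes (rather than arbitrary viewing directions) are simplifying assumptions made here.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: each pixel of the result is the
    brightest voxel along the line of sight (here, a coordinate axis)."""
    return volume.max(axis=axis)

def mpr(volume, i, j, k):
    """Multiplanar reformatting: three mutually perpendicular slices
    through the voxel (i, j, k)."""
    return volume[i, :, :], volume[:, j, :], volume[:, :, k]
```

A rotating MIP, as mentioned further below, would repeat the projection for successively rotated copies of the volume; the principle per frame is the same.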
The restricted possibilities of optical illustration for three-dimensional image data firstly complicate orientation within the objects displayed, since the viewer has immediate access neither to the depth information nor, in association therewith, to navigation within the data. This problem arises both in viewing, for example in diagnostic evaluation, and in production, for example in three-dimensional construction.
In medical diagnostics, methods exist that use a rotating MIP of a functional data record for navigation. Their disadvantage is that the anatomical assignment is not always unique, for example when two hot spots lie very close to one another. Consequently, these methods require a troublesome two-stage procedure: firstly, a slice plane through the hot spot of interest is laid on the rotating MIP (one-dimensional information), and then this slice must additionally be displayed and the position of the hot spot within it determined. Only then is the three-dimensional information relating to the position available.
Tools for exploring volume data records that comprise two-dimensional and three-dimensional input units are known from the dissertation entitled “3D-EXPLORATION VON VOLUMENDATEN; Werkzeuge zur interaktiven Erkundung medizinischer Bilddaten” [“THREE-DIMENSIONAL EXPLORATION OF VOLUME DATA; Tools for interactive exploration of medical image data”], Oct. 21, 1998, by M. Jahnke. For example, a three-dimensional cursor object is proposed with which it is possible to select a partial volume of a volume data record; this partial volume can be understood as a region of interest (ROI) or volume of interest (VOI). The partial volume selected in such a way can then be further used as an independent viewing volume within which the exploration is continued.
A further three-dimensional tool proposed there is the so-called prober, a three-dimensional geometric object, for example a cube. The prober can be positioned like a cursor. It serves to determine scanned values of the volume it respectively encloses; in the case of a cube, these scanned values can be two-dimensional projections of the volume onto the cube faces. The tools proposed in the work of M. Jahnke each serve the manual exploration of partial volumes.
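The prober idea can be sketched as follows, assuming a cubic prober and maximum projections onto its faces; the function name, the parameterization by center and half-width, and the choice of maximum (rather than some other scanned value) are illustrative assumptions, not details from the cited work.

```python
import numpy as np

def probe(volume, center, half):
    """Extract the cubic subvolume around `center` and return its three
    axis-aligned maximum projections, one per pair of cube faces."""
    z, y, x = center
    sub = volume[z - half:z + half + 1,
                 y - half:y + half + 1,
                 x - half:x + half + 1]
    return sub.max(axis=0), sub.max(axis=1), sub.max(axis=2)
```

Repositioning the prober, as with a cursor, amounts to calling this function with a new center, so the displayed face projections follow the prober interactively.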
It is known from section 2.3.5 of the master's thesis entitled “A System for Surgical Planning and Guidance using Image Fusion and Interventional MR” by David T. Gering, submitted to the Massachusetts Institute of Technology in December 1999, to click on a point of a first two-dimensional projection of a three-dimensional electronic data record in order to set the center of the first projection, and the centers of a second and a third projection having the same orientation as the first, to the clicked point. The three projections respectively have different magnification factors, as illustrated in FIGS. 2 to 7 of that thesis.