The present invention relates generally to diagnostic imaging and, more particularly, to a method and apparatus of combining or fusing functional diagnostic data and anatomical diagnostic data acquired of a subject with imaging systems of different modalities to generate a composite image for clinical inspection.
The fusion of functional image data and anatomical image data is a widely practiced technique for providing composite images for improved pathology identification and clinical diagnosis. Typically, the functional and anatomical image data are acquired using nuclear medicine-based systems such as single-photon emission computed tomography (SPECT) and positron emission tomography (PET), or radiology-based imaging systems such as computed tomography (CT), magnetic resonance (MR), ultrasound, and x-ray. Generally, it is desirable to “fuse” an image from SPECT or PET with an image from CT or MR. In this regard, it is typically desired for the functional image from SPECT or PET to be superimposed on the anatomical image acquired using CT or MR.
Fusion of functional and anatomical data that has been acquired separately with imaging systems predicated on different imaging technologies can be problematic. That is, the functional data may be acquired at a different time than the anatomical data. As such, patient positioning between the separate data acquisitions typically varies. Acquisitions of different extents, with different slice thicknesses, pixel sizes, and central points, are also not uncommon. As such, for a clinically valuable composite image to be produced, these differences, as well as others typically encountered, must be resolved.
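The grid mismatch described above (different slice thicknesses, pixel sizes, and central points) is typically resolved by resampling one volume onto the other's grid. The following is a minimal, hypothetical sketch, assuming NumPy and axis-aligned grids described by per-axis spacing and origin; the function name and nearest-neighbor interpolation are illustrative assumptions, not a description of any particular system.

```python
import numpy as np

def resample_nearest(src, src_spacing, src_origin,
                     dst_shape, dst_spacing, dst_origin):
    """Resample a 3-D volume onto a destination grid (nearest neighbor).

    Grids are assumed axis-aligned; spacings and origins are (z, y, x)
    in millimeters. Destination voxels that fall outside the source
    volume are set to 0.
    """
    out = np.zeros(dst_shape, dtype=src.dtype)
    # Physical coordinate of each destination sample along each axis.
    coords = [np.arange(n) * s + o
              for n, s, o in zip(dst_shape, dst_spacing, dst_origin)]
    # Map physical coordinates back to (rounded) source indices.
    src_idx = [np.round((c - o) / s).astype(int)
               for c, s, o in zip(coords, src_spacing, src_origin)]
    zz, yy, xx = np.meshgrid(*src_idx, indexing="ij")
    valid = ((zz >= 0) & (zz < src.shape[0]) &
             (yy >= 0) & (yy < src.shape[1]) &
             (xx >= 0) & (xx < src.shape[2]))
    out[valid] = src[zz[valid], yy[valid], xx[valid]]
    return out
```

In practice a production system would use trilinear or higher-order interpolation, but the index mapping between spacings and origins is the essential step.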
One solution has been the development of a hybrid scanner capable of acquiring PET and CT images during a single scan study in such a manner as to avoid many of the drawbacks enumerated above. A combined PET/CT scanner, however, may not be feasible in all circumstances. For instance, it may not be practical for a diagnostic imaging center, hospital, or the like to replace existing PET and CT systems with a combined imager. Moreover, a combined PET/CT scanner, by definition, generates a composite image of functional and anatomical data acquired using PET and CT, respectively. The scanner cannot, however, provide a composite image of PET and MR data, SPECT and MR data, or SPECT and CT data. As such, a hybrid system may not address the myriad diagnostic needs of a radiologist or other health care provider in rendering a diagnosis to a patient.
Another solution, consistent with conventional fusion techniques, fails to adequately address the drawbacks associated with overlaying collocated functional and anatomical data that are not registered. That is, present fusion protocols combine data having a common coordinate alignment but fail to register the functional and anatomical images, where registration is commonly defined as the process of aligning medical image data. These protocols are based on the premise that the functional and anatomical data sets were acquired under identical physiological states and therefore can be fused without taking additional measures into account. In this regard, conventional fusion techniques orient the functional and anatomical data but do not take measures to sufficiently align them. Furthermore, the image resolution of PET and SPECT is limited by the emission physics of the radioisotopes, for example the positron range associated with the maximum energy of positron-emitting isotopes in PET. The resolution of functional images is therefore notably inferior to that of anatomical images. Another consideration that specifically affects cardiac imaging is the considerable amount of motion, which can add blurring to any image set. The goal of anatomical imaging of the heart is to observe the heart without motion. Functional imaging of the heart can compensate for motion by dividing the acquisition into gated bins, but the counts available to each bin equal the total dataset divided by the number of bins. Because the number of coincidence events is limited by the number of radioactive decay events, observing as much of the data as possible is desirable for a successful diagnosis. As a result, the radiologist or other health care provider must decipher a single composite image in which the functional and anatomical information are misaligned with respect to one another. Additional post-fusion processing steps may then be required to correct the misalignment of the respective images.
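The counts-versus-bins tradeoff described above amounts to simple arithmetic: the total event count is the numerator and the number of gate bins is the denominator. A brief sketch with hypothetical numbers (the function name and the event counts are illustrative assumptions):

```python
def counts_per_bin(total_coincidence_events: int, num_gate_bins: int) -> float:
    """Counts available to each gated reconstruction.

    Gating divides the total acquired events (numerator) by the number
    of cardiac bins (denominator); more bins means less motion blur per
    bin but fewer counts, and hence a noisier reconstructed image.
    """
    return total_coincidence_events / num_gate_bins

# Hypothetical acquisition: 20 million coincidence events, 8 gate bins.
print(counts_per_bin(20_000_000, 8))   # → 2500000.0
# Finer gating (16 bins) halves the statistics available per image:
print(counts_per_bin(20_000_000, 16))  # → 1250000.0
```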
A conventional fusion of CT and PET image data is illustrative of the above drawbacks. During a PET/CT cardiac acquisition, the CT study is performed with ECG gating and the PET study may or may not be performed with ECG gating. The anatomical position of the heart typically changes over the ECG cycle. During image processing, the CT image is reconstructed from a portion of the data centered on a selected phase of the cardiac cycle in order to provide an image with the least amount of motion blurring artifacts. The coronary arteries are then tracked and segmented out of the CT image. The segmented images retain the coordinate system of the original data, frozen at one particular phase of the cardiac cycle. A static or dynamic PET image may then be reconstructed from the entire set of PET data averaged over many ECG cycles. In a gated study, a PET image set is reconstructed for each bin, and one of these bins may correspond to the selected phase at which the CT data set was reconstructed, in which case the alignment may improve. These PET images are then processed such that the left ventricle is segmented based on the long axis of the heart. Using this information, a PET 3D model can be displayed in “model” space that approximates the anatomical shape of a left ventricle. The CT image is then fused with the PET image along the model coordinates to form a composite image. However, the respective images from which the composite image is formed are not registered because the coordinate systems are not common to both image sets. Depending on the amount of image blurring due to radioactive tracer energy, the degree of cardiac motion, and the modeling techniques, different amounts of misalignment may be introduced.
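The step of matching a gated PET bin to the cardiac phase used for the CT reconstruction can be sketched as follows. This is a hypothetical illustration only, assuming the R-R interval is expressed as a percentage and that the gate bins partition it evenly; the function name and parameters are not taken from any particular scanner.

```python
def matching_gate_bin(ct_phase_percent: float, num_bins: int) -> int:
    """Index of the gated PET bin whose phase window contains the
    cardiac phase (as a percentage of the R-R interval) selected for
    the CT reconstruction. Bins are assumed to evenly partition
    0-100% of the cycle.
    """
    if not 0.0 <= ct_phase_percent < 100.0:
        raise ValueError("phase must be in [0, 100)")
    bin_width = 100.0 / num_bins
    return int(ct_phase_percent / bin_width)

# CT reconstructed at 75% of the R-R interval, PET gated into 8 bins:
print(matching_gate_bin(75.0, 8))  # → 6
```

Reconstructing the PET data from only that bin trades count statistics for a cardiac phase consistent with the CT image, which is why alignment may improve under such conditions.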
As such, the composite image typically must undergo additional and time-consuming processing to effectively align the functional data with the anatomical data in a clinical area of interest to provide optimal images for diagnosis.
Another classic multi-modality paradigm aligns internal or external fiducial markers in a functional image with corresponding anatomical points in an anatomical image. This conventional fiducial marker-based system implements a manual method of fusion that does not take local variations in the datasets into account. The conventional automated rigid or non-rigid body registration process instead uses mutual information, a measure of the statistical dependence between the functional and anatomical images, as the cost function. The cost function thereby defines or guides the registration of the functional data to the anatomical data. There are also methods that use fiducial markers together with rigid and non-rigid (e.g., affine) transformations to register images. However, these automated methods do not use any localized anatomical constraints to guide them. As a result, these conventional approaches may only perform data-to-data fusion and, as such, are inapplicable when fusion between data and modeled data, or between modeled data and modeled data, is desired.
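The mutual information cost function named above can be estimated from a joint intensity histogram. The following is a minimal sketch, assuming NumPy; the function name, bin count, and test images are illustrative assumptions, and a real registration loop would repeatedly transform one image to maximize this score.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally shaped images, estimated
    from their joint intensity histogram. Higher values indicate
    stronger statistical dependence between the intensity patterns.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image shares far more information with itself than with noise:
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
print(mutual_information(a, a) > mutual_information(a, b))  # → True
```

Because mutual information compares intensity statistics rather than intensity values directly, it tolerates the very different contrast characteristics of functional and anatomical images, which is why it is a common choice of cost function for automated multi-modality registration.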
Therefore, it would be desirable to design an apparatus and method of fusing multi-modality images in which alignment is resolved prior to fusion of the separate images, thereby reducing post-fusion processing, and which supports fusion of modeled functional and/or anatomical data.