Registration of medical images is critical for several image analysis applications including computer-aided diagnosis (CAD), interactive cancer treatment planning, and monitoring of therapy progress. Mutual information (MI) is a popular image similarity metric for inter-modal and inter-protocol registration. Most MI-based registration techniques rest on the assumption that a consistent statistical relationship exists between the intensities of the two images being registered. Image intensity information alone, however, is often insufficient for robust registration. Hence, if images A and B belong to different modalities (e.g. magnetic resonance imaging (MRI) and histology), or if B is degraded by imaging artifacts (e.g. background inhomogeneity in MRI or post-acoustic shadowing in ultrasound), then A and B may not share sufficient information with each other to facilitate registration by maximization of MI. Additional information not provided by the image intensities of B is required. This may be obtained by transformation of B from intensity space to other feature space representations, B′1, B′2, . . . , B′n, that are not prone to the intensity artifacts of B and further explain structural details of A. FIG. 1(c) demonstrates a scenario where an ill-defined intensity-based MI similarity metric results in imperfect alignment in registering a prostate histological section (1(a)) with the corresponding MRI section (1(b)). Conventional MI also results in misalignment (FIG. 1(f)) of a T2 MRI brain slice (1(d)) with a T1 MRI brain slice (1(e)) to which simulated affine deformation and background inhomogeneity have been added.
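By way of background, the MI between two images is commonly estimated from their joint intensity histogram. The following is a minimal sketch of such an estimator, not the method of the present disclosure; the function name, bin count, and use of NumPy are illustrative assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI between two same-sized images from their joint histogram.

    Illustrative sketch only: MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) ).
    """
    # Joint histogram of paired intensities, normalized to a joint probability.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    # Marginal probabilities of each image.
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    # Sum only over nonzero joint-probability bins to avoid log(0).
    nz = pxy > 0
    px_py = np.outer(px, py)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))
```

Under this estimate, an image registered against itself yields high MI, while two statistically unrelated images yield MI near zero, which is why maximizing MI drives the images toward alignment when a consistent intensity relationship exists.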
Incorporating additional information to complement the MI metric for improved registration has been investigated previously. Image gradients, cooccurrence matrices, color channels, and connected component labels have all been considered for incorporation by use of weighted sums of MI of multiple image pairs, higher-order joint distributions and MI, and reformulations of MI. The utility of these techniques is constrained by (1) the limited information contained in a single, arbitrarily chosen feature to complement image intensity and (2) the ad hoc formulations by which the information is incorporated into a similarity metric.
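One such prior formulation, the weighted sum of MI over multiple image pairs, can be sketched as follows. This is an illustration of the ad hoc combination described above, not the claimed method; the function names and equal weighting are assumptions for the example.

```python
import numpy as np

def _mi(a, b, bins=32):
    # Joint-histogram MI estimate (illustrative helper).
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def weighted_mi(reference, feature_images, weights):
    """Weighted sum of MI between a reference image and several feature
    representations (e.g. gradients or color channels) of the moving image."""
    return sum(w * _mi(reference, f) for w, f in zip(weights, feature_images))
```

The limitation noted above is visible in this form: the result depends on which features are chosen and how the weights are set, with no principled rule for either.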
While one may argue that inhomogeneity correction and filtering methods may help overcome the limitations of intensity-based MI techniques, it should be noted that these offer only partial solutions, and conventional MI often cannot address the vast structural differences between different modalities and protocols.
Accordingly, there is a need in the art for a reliable method capable of normalizing data from images taken in different modalities or protocols.