It is sometimes necessary to join two images to form a larger, composite image. Such images typically share overlapping features that must be aligned properly to create the larger image; typically, in-plane translational offsets must be calculated to align the images. The problem is compounded when dealing with 3D volume data. For example, to diagnose and analyze scoliosis using MRI, separate 3D volume data sets of the upper and lower spine are acquired in order to achieve the necessary field of view while preserving the necessary image quality and detail. In this case, both in-plane and out-of-plane translational offsets must be computed to align the volume data.
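The passage above does not specify how such translational offsets are found; one common technique is phase correlation, which locates the peak of an FFT-based cross-correlation between the two volumes. The sketch below is purely illustrative (the function name `estimate_translation` and the use of NumPy are assumptions, not part of the original disclosure) and recovers an integer 3D offset under the simplifying assumption of a circular (wrap-around) shift:

```python
import numpy as np

def estimate_translation(ref, mov):
    """Illustrative sketch: estimate the integer 3D translational offset
    that aligns `mov` to `ref` using phase correlation.

    Assumes `mov` is a circularly shifted copy of `ref` (a simplification
    of the partial-overlap case described in the text)."""
    # Cross-power spectrum of the two volumes.
    spectrum = np.fft.fftn(ref) * np.conj(np.fft.fftn(mov))
    spectrum /= np.abs(spectrum) + 1e-12      # keep phase only
    corr = np.real(np.fft.ifftn(spectrum))    # correlation surface
    # The peak location gives the shift; unwrap indices past the midpoint
    # so shifts come out signed rather than modulo the volume size.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

The returned offset is the shift to apply to `mov` (e.g., via `np.roll`) to bring it back into register with `ref`. In the scoliosis example from the text, the out-of-plane component of this offset would correspond to the axial displacement between the upper-spine and lower-spine acquisitions.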
Presently used techniques register two images from differing modalities (e.g., MRI and CT) for the purpose of fusing the data. Both modalities cover the same volume of anatomy, and several control points are used to identify common features. The need for two different modalities and several control points increases the complexity of these techniques.
Of utility, then, are methods and systems that reduce the complexity of prior art systems and processes for joining volume image data from a single modality.