Medical image modalities such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Ultrasound are capable of providing 3D views of a patient's anatomy. However, the data acquired by such modalities provides only a static view of the patient. Although the acquired data can be displayed and/or manipulated in a variety of ways (e.g., through rotation), such manipulation is generally limited, and clinicians cannot always achieve all the views they would need for a thorough examination of the patient. Moreover, once acquired, the data is divorced from the real world, and the clinician is required to mentally map the displayed image to locations on the patient's physical anatomy.
The positioning capabilities of mobile devices have evolved quickly over the past several years. Upcoming mobile devices include sensor technology which provides accurate measurements in six degrees of freedom (3D position and camera orientation). This information will support many applications, since it readily enables navigation and interaction in 3D space.
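As an illustration of the six-degrees-of-freedom measurement mentioned above, the following minimal Python sketch (not part of the source; the `Pose6DoF` class and `view_direction` helper are hypothetical names) represents a device pose as a 3D position plus yaw/pitch/roll orientation and derives the direction the device camera points:

```python
import math
from dataclasses import dataclass


@dataclass
class Pose6DoF:
    """Hypothetical six-degrees-of-freedom device pose: 3D position
    (x, y, z) plus orientation as yaw/pitch/roll angles in radians."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float


def view_direction(pose: Pose6DoF) -> tuple:
    """Unit vector along which the device camera points, derived from
    yaw and pitch; roll spins the image about the view axis and does
    not change the viewing direction."""
    cp = math.cos(pose.pitch)
    return (cp * math.cos(pose.yaw),
            cp * math.sin(pose.yaw),
            math.sin(pose.pitch))


# Example: a device one metre above the origin, looking along +x.
pose = Pose6DoF(x=0.0, y=0.0, z=1.0, yaw=0.0, pitch=0.0, roll=0.0)
print(view_direction(pose))  # (1.0, 0.0, 0.0)
```

A renderer could feed such a pose into a view matrix so that moving the device naturally re-frames the 3D anatomy.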
Accordingly, it is desired to create a technique for performing visualization and rendering of 3D medical image data using a mobile device.