This invention relates generally to an imaging device and method and, in particular, to a medical imaging device and method.
While invasive surgery may have many beneficial effects, it can cause physical and psychological trauma to the patient from which recovery is difficult. A variety of minimally invasive surgical procedures are therefore being developed to minimize trauma to the patient. However, these procedures often require physicians to perform delicate procedures within a patient's body without being able to directly see the area of the patient's body on which they are working. It has therefore become necessary to develop imaging techniques to provide the medical practitioner with information about the interior of the patient's body.
Additionally, a non-surgical or pre-surgical medical evaluation of a patient frequently requires the difficult task of evaluating imaging from several different modalities along with a physical examination. This requires mental integration of numerous data sets from the separate imaging modalities, which are seen only at separate times by the physician.
A number of imaging techniques are commonly used today to gather two-, three- and four-dimensional data. These techniques include ultrasound, computerized X-ray tomography (CT), magnetic resonance imaging (MRI), electric potential tomography (EPT), positron emission tomography (PET), brain electrical activity mapping (BEAM), magnetic resonance angiography (MRA), single photon emission computed tomography (SPECT), magnetoelectro-encephalography (MEG), arterial contrast injection angiography, digital subtraction angiography and fluoroscopy. Each technique has attributes that make it more or less useful for creating certain kinds of images, for imaging a particular part of the patient's body, for demonstrating certain kinds of activity in those body parts and for aiding the surgeon in certain procedures. For example, MRI can be used to generate a three-dimensional representation of a patient's body at a chosen location. Because of the physical nature of the MRI imaging apparatus and the time that it takes to acquire certain kinds of images, however, it cannot conveniently be used in real time during a surgical procedure to show changes in the patient's body or to show the location of surgical instruments that have been placed in the body. Ultrasound images, on the other hand, may be generated in real time using a relatively small probe. The image generated, however, lacks the accuracy and three-dimensional detail provided by other imaging techniques.
Medical imaging systems that utilize multimodality images and/or position-indicating instruments are known in the prior art. Hunton, N., Computer Graphics World (October 1992, pp. 71-72) describes a system that uses an ultrasonic position-indicating probe to reference MRI or CT images to locations on a patient's head. Three or four markers are attached to the patient's scalp prior to the MRI and/or CT scans. The resulting images of the patient's skull and brain and of the markers are stored in a computer's memory. Later, in the operating room, the surgeon calibrates a sonic probe with respect to the markers (and, therefore, with respect to the MRI or CT image) by touching the probe to each of the markers and generating a sonic signal which is picked up by four microphones on the operating table. The timing of the signals received by each microphone provides probe position information to the computer. Information regarding probe position for each marker registers the probe with the MRI and/or CT image in the computer's memory. The probe can thereafter be inserted into the patient's brain. Sonic signals from the probe to the four microphones will show how the probe has moved within the MRI image of the patient's brain. The surgeon can use information of the probe's position to place other medical instruments at desired locations in the patient's brain. Since the probe is spatially located with respect to the operating table, one requirement of this system is that the patient's head be kept in the same position with respect to the operating table as well. Movement of the patient's head would require a recalibration of the sonic probe with the markers.
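The position computation underlying such a sonic probe can be sketched as time-of-flight trilateration. The following is a minimal illustration only, not the referenced system's implementation; the microphone coordinates, the function names and the least-squares linearization are all hypothetical assumptions (note that the microphones here are deliberately non-coplanar, since four coplanar microphones leave the probe's height ambiguous):

```python
import numpy as np

# Hypothetical microphone positions (metres); the fourth is raised so the
# geometry is non-coplanar and the height of the probe is recoverable.
mics = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.5]])

SPEED_OF_SOUND = 343.0  # m/s in air (approximate)

def locate_probe(arrival_times):
    """Estimate the probe position from sonic pulse arrival times.

    Each microphone constrains the probe to a sphere of radius
    (speed of sound x arrival time).  Subtracting the first sphere
    equation from the others linearizes the system, which is then
    solved by least squares.
    """
    d = SPEED_OF_SOUND * np.asarray(arrival_times)  # distance to each mic
    p0, d0 = mics[0], d[0]
    A = 2.0 * (mics[1:] - p0)
    b = d0**2 - d[1:]**2 + np.sum(mics[1:]**2, axis=1) - np.sum(p0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

In practice the timing would come from the microphones' signal hardware; the same solve applies whether the pulse source is the calibration marker or the probe itself.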
Grimson, W. E. L., et al., "An Automatic Registration Method for Frameless Stereotaxy, Image Guided Surgery, and Enhanced Reality Visualization," IEEE CVPR '94 Proceedings (June 1994, pp. 430-436) discuss a device which registers three-dimensional data with a patient's head on the operating table and calibrates the position of a video camera relative to the patient using distance information derived from a laser rangefinder, cross-correlating laser rangefinder data with laser scan-line image data and with medical image data. The system registers MRI or CT scan images to the patient's skin surface depth data obtained by the laser range scanner, then determines the position and orientation of a video camera relative to the patient by matching video images of the laser points on an object to reference three-dimensional laser data. The system, as described, does not function at an interactive rate, and hence, the system cannot transform images to reflect the changing point of view of an individual working on the patient. Because the system is dependent upon cumbersome equipment such as laser rangefinders which measure distance to a target, it cannot perform three-dimensional image transformations guided by ordinary intensity images. The article mentions hypothetically using head-mounted displays and positioning a stationary camera "in roughly the viewpoint of the surgeon, i.e. looking over her shoulder." Although the article remarks that "viewer location can be continually tracked," there is no discussion of how the authors would accomplish this.
Kalawasky, R., "The Science of Virtual Reality and Virtual Environments," pp. 315-318 (Addison-Wesley 1993), describes an imaging system that uses a position-sensing articulated arm integrated with a three-dimensional image processing system such as a CT scan device to provide three-dimensional information about a patient's skull and brain. As in the device described by Hunton, metallic markers are placed on the patient's scalp prior to the CT scan. A computer develops a three-dimensional image of the patient's skull (including the markers) by taking a series of "slices" or planar images at progressive locations, as is common for CT imaging, then interpolating between the slices to build the three-dimensional image. After obtaining the three-dimensional image, the articulated arm can be calibrated by correlating the marker locations with the spatial position of the arm. So long as the patient's head has not moved since the CT scan, the arm position on the exterior of the patient can be registered with the three-dimensional CT image.
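The slice-interpolation step mentioned above is conventionally a linear blend between adjacent planar images. The sketch below is illustrative only; the function names and the choice of simple linear interpolation (rather than whatever scheme the described system actually used) are assumptions:

```python
import numpy as np

def interpolate_slices(slice_a, slice_b, t):
    """Linearly blend two adjacent tomographic slices.

    t = 0 returns slice_a; t = 1 returns slice_b; intermediate values
    approximate the plane lying between the two acquired slices.
    """
    return (1.0 - t) * slice_a + t * slice_b

def build_volume(slices, factor):
    """Stack acquired slices into a 3-D volume, inserting factor - 1
    interpolated planes between each adjacent pair of slices."""
    planes = []
    for a, b in zip(slices[:-1], slices[1:]):
        for k in range(factor):
            planes.append(interpolate_slices(a, b, k / factor))
    planes.append(slices[-1])
    return np.stack(planes)
```

With N acquired slices and an interpolation factor of f, the resulting volume contains f * (N - 1) + 1 planes.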
Heilbrun, M. P., "The Evolution and Integration of Microcomputers Used with the Brown-Roberts-Wells (BRW) Image-guided Stereotactic System," (in Kelly, P. J., et al., "Computers in Stereotactic Neurosurgery," pp. 43-55 (Blackwell Scientific Publications 1992)) briefly mentions the future possibility of referencing (within the same image set) intracranial structures to external landmarks such as a nose. However, he does not describe how this would be accomplished, nor does he describe such a use for multimodality image comparison or compositing.
Peters, T. M., et al. (in Kelly, P. J., et al., "Computers in Stereotactic Neurosurgery," p. 196 (Blackwell Scientific Publications 1992)) describe the use of a stereotactic frame with a system for using image analysis to read position markers on each tomographic slice taken by MR or CT, as indicated by the positions of cross-sections of N-shaped markers on the stereotactic frame. While this method is useful for registering previously acquired tomographic data, it does not help to register a surgeon's view to that data. Furthermore, the technique cannot be used without a stereotactic frame.
Goerss, S. J., "An Interactive Stereotactic Operating Suite," and Kall, B. A., "Comprehensive Multimodality Surgical Planning and Interactive Neurosurgery," (both in Kelly, P. J., et al., "Computers in Stereotactic Neurosurgery," pp. 67-86, 209-229 (Blackwell Scientific Publications 1992)) describe the Compass(trademark) system of hardware and software. The system is capable of performing a wide variety of image processing functions, including the automatic reading of stereotactic frame fiducial markers, three-dimensional reconstructions from two-dimensional data, and image transformations (scaling, rotating, translating). The system includes an "intramicroscope" through which computer-generated slices of a three-dimensionally reconstructed tumor, correlated in location and scale to the surgical trajectory, can be seen together with the intramicroscope's magnified view of underlying tissue. Registration of the images is not accomplished by image analysis, however. Furthermore, there is no mention of any means by which a surgeon's instantaneous point of view is followed by appropriate changes in the tomographic display. This method is also dependent upon a stereotactic frame, and any movement of the patient's head would presumably disable the method.
Suetens, P., et al. (in Kelly, P. J., et al., "Computers in Stereotactic Neurosurgery," pp. 252-253 (Blackwell Scientific Publications 1992)) describe the use of a head-mounted display with magnetic head trackers that changes the view of a computerized image of a brain with respect to the user's head movements. The system does not, however, provide any means by which information acquired in real time during a surgical procedure can be correlated with previously acquired imaging data.
Roberts, D. W., et al., "Computer Image Display During Frameless Stereotactic Surgery," (in Kelly, P. J., et al., "Computers in Stereotactic Neurosurgery," pp. 313-319 (Blackwell Scientific Publications 1992)) describe a system that registers pre-procedure images from CT, MRI and angiographic sources to the actual location of the patient in an operating room through the use of an ultrasonic rangefinder, an array of ultrasonic microphones positioned over the patient, and a plurality of fiducial markers attached to the patient. Ultrasonic "spark gaps" are attached to a surgical microscope so that the position of the surgical microscope with respect to the patient can be determined. Stored MRI, CT and/or angiographic images corresponding to the microscope's focal plane may be displayed.
Kelly, P. J. (in Kelly, P. J., et al., "Computers in Stereotactic Neurosurgery," p. 352 (Blackwell Scientific Publications 1992)) speculates about the future possibility of using magnetic head tracking devices to cause the surgical microscope to follow the surgeon's changing field of view by following the movement within the established three-dimensional coordinate system. Insufficient information is given to build such a system, however. Furthermore, this method would also be stereotactic frame dependent, and any movement of the patient's head would disable the coordinate correlation.
Krueger, M. W., "The Emperor's New Realities," pp. 18-33, Virtual Reality World (November/December 1993) describes generally a system which correlates real-time images with stored images. The correlated images, however, are of different objects, and the user's point of view is not tracked.
Finally, Stone, R. J., "A Year in the Life of British Virtual Reality," pp. 49-61, Virtual Reality World (January/February 1994) discusses the progress of Advanced Robotics Research Limited in developing a system for scanning rooms with a laser rangefinder and processing the data into simple geometric shapes "suitable for matching with a library of a priori computer-aided design model primitives." While this method seems to indicate that the group is working toward generally relating two sets of images acquired by different modalities, the article provides no means by which such matching would be accomplished, nor does there appear to be classification involved at any point. No means are provided for acquiring, processing, and interacting with image sets in real time, and no means are provided for tracking the instantaneous point of view of a user who is performing a procedure, thereby accessing another data set.
As can be appreciated from the prior art, it would be desirable to have an imaging system capable of displaying single modality or multimodality imaging data, in multiple dimensions, in its proper size, rotation, orientation, and position, registered to the instantaneous point of view of a physician examining a patient or performing a procedure on a patient. Furthermore, it would be desirable to do so without the expense, discomfort, and burden of affixing a stereotactic frame to the patient in order to accomplish these goals. It would also be desirable to utilize such technology for non-medical procedures such as the repair of a device contained within a sealed chassis.
This invention provides a method and apparatus for obtaining and displaying in real time an image of an object obtained by one modality such that the image corresponds to a line of view established by another modality. In a preferred embodiment, the method comprises the following steps: obtaining a follow image library of the object via a first imaging modality; providing a lead image library obtained via a second imaging modality; referencing the lead image library to the follow image library; obtaining a lead image of the object in real time via the second imaging modality along a lead view; comparing the real-time lead image to lead images in the lead image library via digital image analysis to identify a follow image line of view corresponding to the lead view; transforming the identified follow image to correspond to the scale, rotation and position of the lead image; and displaying the transformed follow image, the comparing, transforming and displaying steps being performed substantially simultaneously with the step of obtaining the lead image in real time.
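The comparing and identifying steps above can be sketched in miniature as follows. This is an illustration only, not the claimed method: the use of normalized correlation as the digital image analysis, and all function and variable names, are assumptions introduced for the example, and the subsequent transformation and display steps are omitted:

```python
import numpy as np

def normalized_correlation(a, b):
    """Similarity score between two equally sized grayscale images;
    1.0 for identical (up to brightness and contrast), lower otherwise."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def find_follow_view(lead_image, lead_library, follow_library):
    """Match a real-time lead image against the lead image library and
    return the follow image whose line of view corresponds to it.

    lead_library and follow_library are assumed to be index-aligned,
    i.e. entry i of each was referenced to the same line of view.
    """
    scores = [normalized_correlation(lead_image, ref) for ref in lead_library]
    best = int(np.argmax(scores))
    return follow_library[best], best
```

Because the lead and follow libraries are referenced to one another in advance, identifying the best-matching lead library entry directly yields the follow image along the corresponding line of view.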
The invention is described in further detail below with reference to the drawings.