Tomographic medical imaging employs the collection of data from a plurality of views of a body. The views are processed mathematically to produce representations of contiguous cross-sectional images. Such cross-sectional images are of great value to the medical diagnostician in a non-invasive investigation of internal body structure. The technique employed to collect the data is a matter of indifference to the present invention. Any technique such as, for example, X-ray computed tomography, nuclear magnetic resonance tomography, single-photon emission tomography, positron emission tomography, or ultrasound tomography may serve equally.
A body to be imaged exists in three dimensions. Tomographic devices process data for presentation as a series of contiguous cross-sectional slices along selectable axes through the body. Each cross-sectional slice is made up of a number of rows and columns of voxels (parallelepiped volumes), each represented by a digitally stored number related to a computed signal intensity in the voxel. In practice, an array may contain, for example, 64 slices, each of 512 by 512 voxels. In normal use, a diagnostician reviews images of a number of individual slices to derive the desired information. In cases where information about a surface within the body is desired, the diagnostician relies on inferences of the 3D nature of the object derived from interrogating the cross-sectional slices. At times, it is difficult or impossible to attain the required inference from reviewing contiguous slices. In such cases, a synthesized 3D image would be valuable.
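The storage scheme described above can be sketched in Python. This is a minimal illustration only; the function and variable names are assumptions, not part of any particular tomographic system:

```python
# Hypothetical illustration of how tomographic voxel data might be held in
# memory: an array of contiguous slices, each n_rows x n_cols voxels, each
# voxel storing a digitally stored number related to a computed signal
# intensity. The text gives 64 slices of 512 by 512 voxels as an example.
def make_volume(n_slices, n_rows, n_cols, fill=0):
    """Allocate a slices x rows x cols volume of signal intensities."""
    return [[[fill] * n_cols for _ in range(n_rows)] for _ in range(n_slices)]

volume = make_volume(4, 8, 8)   # small volume for illustration
volume[2][3][5] = 137           # intensity of voxel at slice 2, row 3, col 5
```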
Synthesizing a 3D image from tomographic data is a two-step process. In the first step, a mathematical description of the desired object is extracted from the tomographic data. In the second step, the image is synthesized from the mathematical description.
Dealing with the second step first, assuming that a surface description can be synthesized from knowledge of the slices, the key is to go from the surface to the 3D image. The mathematical description of the object is made up of the union of a large number of surface elements (SURFELS). The surfels are operated on by conventional computer graphics software, having its genesis in computer aided design and computer aided manufacturing, to apply surface shading to objects to aid in image interpretation through a synthesized two-dimensional image. The computer graphics software projects the surfels onto a rasterized image and determines which pixels of the rasterized image are turned on, and with what intensity or color. Generally, the shading is lightest for image elements having surface normals along an operator-selected line of sight and successively darker for those elements inclined to the line of sight. Image elements having surface normals inclined more than 90 degrees from the selected line of sight are hidden in a 3D object and are suppressed from the display. Foreground objects on the line of sight hide background objects. The shading gives a realistic illusion of three dimensions. It is thus apparent that the information provided by the orientation of the surface normals relative to the line of sight is very important in producing a realistic 3D image.
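The shading rule described above can be sketched as follows. This is a minimal illustration assuming a simple cosine (Lambertian-style) rule and arbitrary vector tuples; it is not the actual graphics software referred to in the text:

```python
import math

def shade(normal, line_of_sight):
    """Return a shading intensity in [0, 1] for a surfel, or None when the
    surfel's normal is inclined more than 90 degrees to the line of sight
    and the surfel should therefore be suppressed from the display."""
    dot = sum(n * s for n, s in zip(normal, line_of_sight))
    n_len = math.sqrt(sum(n * n for n in normal))
    s_len = math.sqrt(sum(s * s for s in line_of_sight))
    cos_angle = dot / (n_len * s_len)
    if cos_angle <= 0.0:   # inclined 90 degrees or more: hidden surface
        return None
    return cos_angle       # lightest when the normal lies along the sight line

print(shade((0, 0, 1), (0, 0, 1)))   # -> 1.0  (facing the viewer, lightest)
print(shade((0, 0, -1), (0, 0, 1)))  # -> None (facing away, suppressed)
```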
Returning now to the problem of extracting a mathematical description of the desired surface from the tomographic slice data, this step is broken down into two subtasks, namely the extraction of the object from the tomographic data, and the fitting of the surface to the extracted object. A number of ways are available to do the first subtask. For example, it is possible to search through the signal intensities in the voxels of a slice to discern regions where the material forming the object has sufficient signal contrast with surrounding regions. For example, signal intensities characteristic of bone in X-ray tomography produce striking contrast with surrounding tissue. A threshold may then be applied to the voxels to distinguish each voxel in the complete array that lies within the desired object from all those that do not.
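The thresholding subtask can be sketched as a single pass over the voxel array. The function name and the choice of a greater-than-or-equal comparison are assumptions for illustration:

```python
def extract_object(volume, threshold):
    """Mark every voxel whose signal intensity meets the threshold as part
    of the object (True) and every other voxel as background (False). The
    threshold itself would be chosen for contrast, e.g. bone versus soft
    tissue in X-ray computed tomography."""
    return [[[value >= threshold for value in row] for row in slice_]
            for slice_ in volume]

volume = [[[10, 200], [180, 30]]]   # one 2x2 slice of signal intensities
mask = extract_object(volume, 100)
print(mask)   # -> [[[False, True], [True, False]]]
```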
Advancing to the second subtask, there are also a number of ways to fit the surface to the extracted object. Work has been done, for example, by Gabor Herman on a method in which each voxel is analyzed to determine whether or not it belongs to the desired object. If it does belong to the object, six rectangular surfels, along with their surface normals, representing the six surfaces of the voxel, are derived for input to computer graphics software. At least partly due to the differences between normals to the actual surface and the normals derived from the orientations of the surfaces of the voxels, this technique produces images of relatively low quality.
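The per-voxel surfel generation described above can be sketched as follows. This is an illustrative reconstruction under assumed names, not the referenced author's implementation; note that the six normals are axis-aligned face normals of the voxel cube rather than normals of the true underlying surface, which is the source of the image-quality limitation noted in the text:

```python
# Each object voxel contributes six rectangular surfels, one per cube face,
# each carrying that face's outward axis-aligned normal.
FACE_NORMALS = [(-1, 0, 0), (1, 0, 0),
                (0, -1, 0), (0, 1, 0),
                (0, 0, -1), (0, 0, 1)]

def voxel_surfels(i, j, k):
    """Return the six (face_center, face_normal) surfels for an object voxel
    centered at integer coordinates (i, j, k) with unit edge length."""
    surfels = []
    for nx, ny, nz in FACE_NORMALS:
        center = (i + 0.5 * nx, j + 0.5 * ny, k + 0.5 * nz)
        surfels.append((center, (nx, ny, nz)))
    return surfels

print(len(voxel_surfels(0, 0, 0)))   # -> 6
```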
A method, called the marching cubes method, disclosed in U.S. Pat. No. 4,710,876, filed June 5, 1985, which has been allowed, overcomes many of the drawbacks of the above-mentioned prior work. The disclosure of this referenced patent is herein incorporated by reference. In this technique, signal values in eight cubically adjacent voxels in the tomographic array are examined for those having a specified relationship to a selected threshold value. When the relationship is found, a binary vector is generated characterizing the manner in which the surface of the object passes through the volume defined by the eight cubically adjacent voxels. Up to four triangular surface elements may be defined in such a volume. Normal vectors to all surface elements thus discovered are input to computer graphics software for display of a shaded 3D image.
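The cube-classification step described above can be sketched as follows. The bit ordering and the greater-than-or-equal comparison are assumptions, and the lookup table that the resulting binary vector would address to obtain the triangular surface elements is omitted:

```python
# Each of the eight cubically adjacent voxel values is compared against the
# selected threshold; the eight comparison results are packed into one byte,
# a binary vector characterizing how the surface passes through the cube.
def cube_index(corner_values, threshold):
    """Pack the eight corner-versus-threshold comparisons into one byte."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value >= threshold:
            index |= 1 << bit
    return index

# All corners below threshold: the surface does not pass through this cube.
print(cube_index([0] * 8, 100))                       # -> 0
# One corner above threshold: the surface cuts off a single corner.
print(cube_index([150, 0, 0, 0, 0, 0, 0, 0], 100))    # -> 1
```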
The marching cubes method is successful in improving the 3D representation of objects derived from tomographic data. It is believed that the 3D image quality produced by surface data calculated from the marching cubes method is improved because the surface normals thus derived are equal to the normalized 3D gradient of the original tomographic data.
A further technique, called the dividing cubes method, for deriving surface data is disclosed in U.S. Pat. No. 4,719,585, filed Aug. 28, 1985. The disclosure of this referenced patent is herein incorporated by reference. The dividing cubes method subdivides each voxel in the tomographic array into smaller elements, each of which the computer graphics software can treat as a point for scan conversion onto the raster of the 3D image. Surface normals to each point are derived from the normalized gradients of the tomographic data.
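The gradient-derived surface normals mentioned above can be sketched with a central-difference estimate. This is a minimal illustration assuming the slices x rows x columns storage described earlier, not the patented method itself:

```python
import math

def gradient_normal(volume, s, r, c):
    """Normalized central-difference gradient of the tomographic data at
    interior voxel (slice s, row r, column c), usable as a surface normal."""
    gs = volume[s + 1][r][c] - volume[s - 1][r][c]
    gr = volume[s][r + 1][c] - volume[s][r - 1][c]
    gc = volume[s][r][c + 1] - volume[s][r][c - 1]
    length = math.sqrt(gs * gs + gr * gr + gc * gc)
    if length == 0:
        return (0.0, 0.0, 0.0)   # flat region: no well-defined normal
    return (gs / length, gr / length, gc / length)

# A volume whose intensity ramps along the column axis: the normal at an
# interior voxel points purely along that axis.
volume = [[[c for c in range(3)] for _ in range(3)] for _ in range(3)]
print(gradient_normal(volume, 1, 1, 1))   # -> (0.0, 0.0, 1.0)
```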
The marching cubes and dividing cubes methods produce an imaging artifact to which the present invention is addressed. A mismatch exists between the data available from the marching cubes or dividing cubes method and the data that can be handled by the conventional computer graphics software and hardware. This mismatch is triggered by the disparity between the number of rows or columns in a slice (generally equal) and the number of slices. At some places on a rasterized 3D image, the method may define more than one surface element for mapping onto the same pixel in the image. The conflicting surface normals may point in different directions. The computer graphics software, without information to guide it, selects one of the surface normals to apply shading in that location, whether it is correct or not. It has been discovered that this effect produces ring-type artifacts at the top of a 3D image of the human skull.
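The pixel conflict described above can be sketched in miniature. The rendering here is deliberately naive (the last surfel written wins) to show how an unguided choice among conflicting normals arises; it is not the actual graphics software referred to in the text:

```python
# Two surfels mapping onto the same pixel carry conflicting normals; with no
# information to guide the choice, this sketch simply keeps whichever surfel
# it processes last, whether or not its normal is correct for that pixel.
def rasterize(surfels):
    """Map (pixel, normal) surfels into a pixel -> normal dictionary."""
    pixel_normals = {}
    for pixel, normal in surfels:
        pixel_normals[pixel] = normal   # later surfels overwrite earlier ones
    return pixel_normals

surfels = [((10, 10), (0, 0, 1)),   # two surfels, same pixel,
           ((10, 10), (1, 0, 0))]   # with conflicting normals
print(rasterize(surfels))           # -> {(10, 10): (1, 0, 0)}
```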