Medical imaging is one of the most useful diagnostic tools available in modern medicine. It allows medical personnel to look non-intrusively into a living body in order to detect and assess many types of injuries, diseases, and other conditions, and thereby helps doctors and technicians to make a diagnosis more easily and accurately, decide on a treatment, prescribe medication, or perform surgery or other treatments.
There are medical imaging processes of many types and for many different purposes, situations, or uses. They commonly share the ability to create an image of a bodily region of a patient, and can do so non-invasively. Examples of common medical imaging types are nuclear medicine (NM) imaging modalities such as positron emission tomography (PET) and single photon emission computed tomography (SPECT). Using these or other imaging types and associated apparatus, an image or series of images may be captured. Other devices may then be used to process the images in some fashion. Finally, a doctor or technician may read the images in order to provide a diagnosis.
A PET camera works by detecting pairs of gamma ray photons in time coincidence. The two photons arise from the annihilation of a positron and electron in the patient's body. The positrons are emitted from a radioactive isotope that has been used to label a biologically important molecule (a radiopharmaceutical). Hundreds of millions of such decays occur per second in a typical clinical scan. Because the two photons arising from each annihilation travel in opposite directions, the rate of detection of such coincident pairs is proportional to the amount of emission activity, and hence the molecule, along the line connecting the two detectors at the respective points of gamma ray interaction. In a PET camera the detectors are typically arranged in rings around the patient. By considering coincidences between all appropriate pairs of these detectors, a set of projection views can be formed, each element of which represents a line integral, or sum, of the emission activity in the patient's body along a well-defined path. These projections are typically organized into a data structure called a sinogram, which contains a set of plane parallel projections at uniform angular intervals around the patient. A three-dimensional image of the radiopharmaceutical's distribution in the body then can be reconstructed from these data.
One particular nuclear medicine imaging technique is known as Positron Emission Tomography, or PET. PET is used to produce images for diagnosing the biochemistry or physiology of a specific organ, tumor or other metabolically active site. Measurement of the tissue concentration of a positron emitting radionuclide is based on coincidence detection of the two gamma photons arising from positron annihilation. When a positron is annihilated by an electron, two 511 keV gamma photons are simultaneously produced and travel in approximately opposite directions. Gamma photons produced by an annihilation event can be detected by a pair of oppositely disposed radiation detectors capable of producing a signal in response to the interaction of the gamma photons with a scintillation crystal. Annihilation events are typically identified by a time coincidence between the detection of the two 511 keV gamma photons in the two oppositely disposed detectors, i.e., the gamma photon emissions are detected virtually simultaneously by each detector. When two oppositely disposed gamma photons each strike an oppositely disposed detector to produce a time coincidence event, they also identify a line of response, or LOR, along which the annihilation event has occurred. An example of a PET method and apparatus is described in U.S. Pat. No. 6,858,847, which patent is incorporated herein by reference in its entirety.
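The time-coincidence identification described above can be illustrated with a short sketch. The fragment below (illustrative only, not the patented apparatus; the 4 ns window and the event lists are assumed values) pairs single-photon detections whose arrival times fall within a coincidence window, with each resulting pair of crystal IDs identifying an LOR:

```python
import numpy as np

def find_coincidences(times_ns, detector_ids, window_ns=4.0):
    """Pair single-photon detections whose arrival times fall within a
    coincidence timing window (window_ns is an assumed, illustrative value).
    Each returned pair of crystal IDs identifies a line of response (LOR)."""
    pairs = []
    i = 0
    while i < len(times_ns) - 1:
        if times_ns[i + 1] - times_ns[i] <= window_ns:
            pairs.append((int(detector_ids[i]), int(detector_ids[i + 1])))
            i += 2  # both singles consumed by this coincidence
        else:
            i += 1  # unpaired single is discarded
    return pairs

# Two true coincidences separated by an unpaired single detection:
times = np.array([0.0, 1.5, 100.0, 250.0, 252.0])
ids = np.array([3, 17, 8, 5, 21])
print(find_coincidences(times, ids))  # [(3, 17), (5, 21)]
```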
After being sorted into parallel projections, the LORs defined by the coincidence events are used to reconstruct a three-dimensional distribution of the positron-emitting radionuclide within the patient. In two-dimensional PET, each 2D transverse section or “slice” of the radionuclide distribution is reconstructed independently of adjacent sections. In fully three-dimensional PET, the data are sorted into sets of LORs, where each set is parallel to a particular detector angle, and therefore represents a two-dimensional parallel projection p(s, φ) of the three-dimensional radionuclide distribution within the patient, where s corresponds to the distance along the imaging plane perpendicular to the scanner axis and φ corresponds to the angle of the detector plane with respect to the x axis in (x, y) coordinate space (in other words, φ corresponds to a particular LOR direction). Coincidence events are integrated or collected for each LOR and stored as a sinogram. In this format, a single fixed point in the image f(x, y) traces a sinusoid in the sinogram. In each sinogram, there is one row containing the LORs for a particular azimuthal angle φ; each such row corresponds to a one-dimensional parallel projection of the tracer distribution at a different coordinate along the scanner axis. This is shown conceptually in FIG. 1.
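The sinusoidal trace of a fixed point can be seen in a short sketch. Assuming ideal, noiseless detection, a point source at (x0, y0) projects at each detector angle φ onto the radial coordinate s = x0·cos φ + y0·sin φ; the function name and bin sizes below are illustrative choices, not from the text:

```python
import numpy as np

def point_sinogram(x0, y0, n_angles=8, n_bins=15, bin_width=1.0):
    """Sinogram of an ideal point source: at each detector angle phi the
    point contributes to the radial bin nearest s = x0*cos(phi) + y0*sin(phi).
    Sizes and binning are illustrative; a real scanner accumulates many
    coincidence counts per bin."""
    sino = np.zeros((n_angles, n_bins))
    phis = np.arange(n_angles) * np.pi / n_angles
    s = x0 * np.cos(phis) + y0 * np.sin(phis)        # the sinusoidal trace
    bins = np.round(s / bin_width).astype(int) + n_bins // 2
    sino[np.arange(n_angles), bins] = 1.0
    return sino

# The single marked bin per row sweeps sinusoidally across the sinogram:
for row in point_sinogram(3.0, 2.0):
    print("".join("#" if v else "." for v in row))
```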
A SPECT camera functions similarly to a PET camera, but detects only single photons rather than coincident pairs. For this reason, a SPECT camera must use a lead collimator with holes, placed in front of its detector panel, to pre-define the lines of response in its projection views. One or more such detector panel/collimator combinations rotate around the patient, creating a series of planar projections, each element of which represents a sum of the emission activity, and hence biological tracer, along the line of response defined by the collimation. As with PET, these data can be organized into sinograms and reconstructed to form an image of the radiopharmaceutical tracer distribution in the body.
The purpose of the reconstruction process is to retrieve the spatial distribution of the radiopharmaceutical from the projection data. A conventional reconstruction step involves a process known as back-projection. In simple back-projection, an individual data sample is back-projected by setting all the image pixels along the line of response pointing to the sample to the same value. In less technical terms, a back-projection is formed by smearing each view back through the image in the direction it was originally acquired. The back-projected image is then taken as the sum of all the back-projected views. Regions where back-projection lines from different angles intersect represent areas which contain a higher concentration of radiopharmaceutical.
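A minimal sketch of simple back-projection follows, assuming parallel projections at uniform angles and nearest-neighbour assignment of pixels to LOR bins (the function and grid sizes are illustrative):

```python
import numpy as np

def back_project(sinogram):
    """Simple (unfiltered) back-projection: smear each 1-D view back across
    the image along its acquisition direction and sum over views.
    Nearest-neighbour sketch; practical code interpolates."""
    n_angles, n_bins = sinogram.shape
    n = n_bins
    image = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    for k in range(n_angles):
        phi = k * np.pi / n_angles
        s = xs * np.cos(phi) + ys * np.sin(phi)   # transverse coordinate of each pixel
        bins = np.clip(np.round(s).astype(int) + n_bins // 2, 0, n_bins - 1)
        image += sinogram[k, bins]
    return image / n_angles

# Back-projecting the sinogram of a single central point gives a blurred
# spot, not a point: the centre is brightest and intensity falls off with radius.
sino = np.zeros((8, 15))
sino[:, 7] = 1.0
img = back_project(sino)
print(img[7, 7], img[7, 0])  # 1.0 0.125
```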
While back-projection is conceptually simple, it does not by itself correctly solve the reconstruction problem. A simple back-projected image is very blurry; a single point in the true image is reconstructed as a circular region that decreases in intensity away from the center. In more formal terms, the point spread function (PSF) of back-projection is circularly symmetric, and decreases as the reciprocal of its radius.
Filtered back-projection (FBP) is a technique to correct the blurring encountered in simple back-projection. Each projection view is filtered before the back-projection step to counteract the blurring point spread function. That is, each of the one-dimensional views is convolved with a one-dimensional filter kernel (e.g. a “ramp” filter) to create a set of filtered views. These filtered views are then back-projected to provide the reconstructed image, a close approximation to the “correct” image.
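The filtering step can be sketched in the frequency domain, where the ramp kernel is simply |frequency|; the discrete FFT-based version below is an illustrative approximation, not an exact band-limited ramp:

```python
import numpy as np

def ramp_filter(views):
    """Filter each 1-D projection view with a ramp (|frequency|) kernel,
    the filtering step of FBP, implemented per view with an FFT."""
    freqs = np.fft.fftfreq(views.shape[-1])
    spectrum = np.fft.fft(views, axis=-1) * np.abs(freqs)
    return np.real(np.fft.ifft(spectrum, axis=-1))

# A boxcar view acquires negative side lobes after ramp filtering; when all
# filtered views are back-projected, those lobes cancel the 1/r blur.
view = np.zeros((1, 32))
view[0, 12:20] = 1.0
filtered = ramp_filter(view)[0]
```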
The inherent randomness of radioactive decay and other processes involved in generating nuclear medical image data results in unavoidable statistical fluctuations (noise) in PET or SPECT data. This is a fundamental problem in clinical imaging that is dealt with through some form of smoothing of the data. In FBP this is usually accomplished by modifying the filter kernel used in the filtering step by applying a low-pass windowing function to it. This results in a spatially uniform, shift-invariant smoothing of the image that reduces noise, but may also degrade the spatial resolution of the image. A disadvantage of this approach is that the amount of smoothing is the same everywhere in the image although the noise is not. Certain regions, e.g. where activity and detected counts are higher, may have relatively less noise and thus require less smoothing than others. Standard windowed FBP cannot adapt to this aspect of the data.
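One common choice of low-pass windowing applies a Hann (raised-cosine) window to the ramp kernel; the sketch below is illustrative, and the cutoff value is an assumed parameter, not one prescribed by the text:

```python
import numpy as np

def windowed_ramp(n, cutoff=0.5):
    """Ramp filter apodized by a Hann (raised-cosine) low-pass window.
    `cutoff` (cycles per bin) is an assumed, illustrative parameter;
    lowering it smooths more, at the cost of spatial resolution."""
    freqs = np.fft.fftfreq(n)
    ramp = np.abs(freqs)
    hann = np.where(np.abs(freqs) <= cutoff,
                    0.5 * (1.0 + np.cos(np.pi * freqs / cutoff)),
                    0.0)
    return ramp * hann

# The window leaves low frequencies nearly untouched but rolls the ramp
# off to zero at the cutoff, trading resolution for noise suppression.
h = windowed_ramp(64)
```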
There are several alternatives to FBP for reconstructing nuclear medical data. In fact, most clinical reconstruction of PET images is now based on some variant of regularized maximum likelihood (RML) estimation because of the remarkable effectiveness of such algorithms in reducing image noise compared to FBP. In a sense, RML's effectiveness stems from its ability to produce a statistically weighted, localized smoothing of an image. These algorithms have some drawbacks, however: they are relatively expensive because they must be computed iteratively; they generally result in poorly characterized, noise-dependent image bias, particularly when regularized by premature stopping (i.e., stopped before convergence); and the statistical properties of their image noise are difficult to determine.
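The core of most RML variants is the MLEM update, in which the current estimate is multiplied by the back-projected ratio of measured to modelled counts; the tiny dense-matrix sketch below is illustrative only (real systems use very large sparse or on-the-fly system models, and add a regularization term or early stopping):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Unregularized MLEM, the iterative core of most RML variants.  Each
    iteration multiplies the current estimate by the back-projected ratio of
    measured to modelled counts, so the effective smoothing is implicitly
    weighted by the local count statistics."""
    x = np.ones(A.shape[1])               # uniform, strictly positive start
    sens = A.sum(axis=0)                  # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward-project current estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens         # multiplicative EM update
    return x

# Noiseless toy problem: the estimate approaches the true distribution
# while remaining non-negative at every iteration.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 0.5, 1.0])
x_hat = mlem(A, A @ x_true)
```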
In a class of algorithms for calculating projections known as the Square Pixel Method, the basic assumption is that the object considered truly consists of an array of N×N square pixels, with the image function f(x, y) assumed to be constant over the domain of each pixel. The method proceeds by evaluating the length of intersection of each ray with each pixel and multiplying that length by the value of the pixel; summing these products over the pixels crossed by the ray yields the projection value S.
The major problem with this method is the unrealistic discontinuity of the model. This is especially apparent for rays whose direction is exactly horizontal or vertical, where relatively large jumps occur in the S values as the rays cross pixel boundaries.
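The intersection-length computation underlying the square pixel method can be sketched as follows, assuming unit square pixels and a ray given by its two endpoints (a simplified Siddon-style grid traversal, offered for illustration rather than as the exact procedure described above):

```python
import numpy as np

def square_pixel_projection(image, p0, p1):
    """Ray sum under the square-pixel model: find the parametric values
    where the segment p0 -> p1 crosses the grid lines, then accumulate
    (intersection length) * (pixel value) segment by segment.  Pixels are
    unit squares with the grid corner at (0, 0)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    n_rows, n_cols = image.shape
    alphas = [0.0, 1.0]
    for axis, n in ((0, n_cols), (1, n_rows)):       # x grid lines, then y
        if d[axis] != 0.0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)
    total = 0.0
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d               # midpoint locates the pixel
        col, row = int(np.floor(mid[0])), int(np.floor(mid[1]))
        if 0 <= row < n_rows and 0 <= col < n_cols:
            total += (a1 - a0) * np.linalg.norm(d) * image[row, col]
    return total
```

For a uniform image, a horizontal ray across four unit pixels sums to 4, while a diagonal crossing of a single unit pixel contributes its full chord length √2.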
A second class of algorithms for calculating projections is the forward projection method. This method is literally the adjoint of the process of “back projection” of the FBP reconstruction algorithm. The major criticism of this algorithm is that the spatial resolution of the reprojection is lessened by the finite spacing between rays. Furthermore, increasing the number of pixels does not contribute to a reduction in this spacing, but does greatly increase processing time.
A third algorithm for calculating projections based on line-integral approximation, developed by Peter M. Joseph and described in the paper entitled An Improved Algorithm for Reprojecting Rays Through Pixel Images, IEEE Transactions on Medical Imaging, Vol. MI-1, No. 3, pp. 192-196, November 1982 (hereinafter, “Joseph's Method”), incorporated by reference herein in its entirety, is similar in structure to the square pixel method. Each given ray K is specified exactly as a straight line. The basic assumption is that the image is a smooth function of x and y sampled on the grid of positions. The line integral desired is related to an integral over either x or y, depending on whether the ray's direction lies closer to the x or y axis. While this algorithm produces a much clearer image than the other two methods, it is slower than either, especially when interpolating oblique segments, for which an interpolation is required in both the transaxial and axial directions for each ray, further slowing the process.
FIG. 2 presents the basic ideas behind Joseph's method. For a given row y in the image, each projection ray in the 2D segment, e.g., 210, receives contributions from the two nearest voxels. The interpolation coefficients are determined by the distances from the point where the projection ray intersects the horizontal line passing through the centers of the two voxels to those voxel centers. Note that each projection ray has different interpolation coefficients, since the distance between ray intersection points (1/cos θ) is not equal to the distance between voxel centers.
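The row-by-row interpolation of Joseph's method can be sketched as follows for the 2D case, assuming the ray lies closer to the vertical axis so the image is traversed row by row (the function name and coordinate conventions are illustrative assumptions):

```python
import numpy as np

def joseph_projection(image, theta, s):
    """Line integral by Joseph's method (2-D sketch).  The ray makes angle
    `theta` with the vertical (|theta| < 45 degrees assumed, so the image is
    traversed row by row) and crosses row 0 at x = s.  Each row contributes
    a linear interpolation of its two nearest pixels; the sum is scaled by
    the per-row intersection length, 1/cos(theta)."""
    n_rows, n_cols = image.shape
    total = 0.0
    for row in range(n_rows):
        x = s + np.tan(theta) * row   # crossing point on this row's line of centres
        i = int(np.floor(x))
        frac = x - i                  # interpolation coefficient for the right voxel
        if 0 <= i < n_cols - 1:       # rays leaving the grid are dropped in this sketch
            total += (1.0 - frac) * image[row, i] + frac * image[row, i + 1]
    return total / np.cos(theta)

img = np.ones((4, 5))
print(joseph_projection(img, 0.0, 2.0))   # vertical ray through a column: 4.0
```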