This invention relates generally to normalization of positron emission tomography (PET) systems. More particularly, this invention relates to the determination of normalization factors utilized in image reconstruction.
Various techniques or modalities may be used for medical imaging of, for example, portions of a patient's body. PET imaging is a non-invasive nuclear imaging technique that makes possible the study of the internal organs of the human body. PET imaging allows the physician to view the patient's entire body, producing images of many physiological functions.
In PET imaging, positron-emitting isotopes are injected into the patient's body. These isotopes, referred to as radiopharmaceuticals, are short-lived unstable isotopes. Once injected into the body, these isotopes decay and emit positively-charged particles called positrons. When a positron encounters an electron, the two are annihilated and converted into a pair of photons, which are emitted in nearly opposite directions. A PET scanner typically includes several coaxial rings of detectors around the patient's body to detect the paired photons from such annihilation events. These rings may be separated by short septa or detector shields.
The detectors include crystals or scintillators to sense the gamma rays interacting with them. Coincidence detection circuits connected to the detectors record only those photons that are detected essentially simultaneously by two detectors on opposite sides of the patient. During a typical scan, millions of detected events are recorded to indicate the number of annihilation events along lines joining pairs of detectors in the ring. When a gamma ray emitted by a source interacts with a crystal in a detector, the crystal converts the gamma ray energy to lower-energy scintillation photons that are then detected by a light sensor. The light sensor converts the scintillation photons to an electrical signal, which is processed by associated electronics to identify the crystal with which the gamma ray interacted, the time of interaction, and the number of photons generated by the gamma ray (i.e., the gamma ray energy). The collected data is then used to reconstruct an image.
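The coincidence-detection principle described above can be illustrated with a minimal sketch. This is not an implementation from the specification: the 6 ns timing window, the event tuples, and the function name are all assumed values chosen for illustration.

```python
# Hypothetical sketch of coincidence detection: two single-detector events
# whose timestamps fall within a fixed timing window are treated as one
# coincidence event. The window and event data are assumed, not from the spec.

COINC_WINDOW_NS = 6.0  # assumed coincidence timing window, in nanoseconds

def find_coincidences(events):
    """events: list of (time_ns, crystal_id) tuples, sorted by time.
    Returns a list of (crystal_a, crystal_b) pairs detected within
    the coincidence window."""
    pairs = []
    for i in range(len(events) - 1):
        t1, c1 = events[i]
        t2, c2 = events[i + 1]
        if t2 - t1 <= COINC_WINDOW_NS and c1 != c2:
            pairs.append((c1, c2))
    return pairs

# Five single events; only the first two and the last two are close
# enough in time to register as coincidences:
singles = [(0.0, 12), (3.5, 147), (40.0, 55), (90.0, 201), (94.0, 63)]
print(find_coincidences(singles))  # [(12, 147), (201, 63)]
```

Each surviving pair defines one line-of-response between the two crystals involved, which is the quantity accumulated during the scan.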
The data collected during a scan, however, may contain inconsistencies. These inconsistencies may arise due to different factors or operating characteristics of the imaging systems, including the presence of shields or septa between the detector rings of the PET scanner, and the presence of attenuation, scatter and normalization effects. The collected data is therefore corrected prior to using such data for reconstruction of the radioisotope distribution image. One of the corrections uses a set of pre-determined normalization factors for correcting the acquired raw PET data. The normalization factors are unique to a given PET machine, and their values may change with time. Hence, the method followed for determination of the normalization factors, also referred to as the normalization process or normalization, is generally repeated periodically. For example, the normalization factors can be determined every six months.
One known method for performing the normalization process is the ‘rotating rod normalization’ process. In this process, a radioactive positron-emitting rod (line) source is rotated inside the ring of detectors. The responses for all system lines-of-response (LORs) are measured (LORs are the lines traced between the two crystals involved in detecting a coincidence event). Events measured in the LORs are then used to calculate the normalization factors. This normalization process is necessary to correct for detector pair efficiency and geometry differences such that all system LORs can be equalized in their response to a true coincidence event.
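The equalization idea behind rotating rod normalization can be sketched as follows. This is a simplified illustration, not the patented method: the LOR identifiers and counts are invented, and the sketch assumes the rod-geometry contribution has already been removed, leaving only per-LOR efficiency differences.

```python
# Hedged sketch: after the rotating-rod scan, each LOR's measured count is
# compared with the mean over all LORs; the ratio becomes a multiplicative
# normalization factor that equalizes the LOR responses. Data are invented.

def normalization_factors(lor_counts):
    """lor_counts: dict mapping an LOR id (here a crystal-pair tuple) to
    the counts measured during the rotating-rod scan, assumed already
    corrected for rod geometry. Returns a dict of normalization factors."""
    mean = sum(lor_counts.values()) / len(lor_counts)
    return {lor: mean / counts for lor, counts in lor_counts.items()}

counts = {(0, 90): 980, (1, 91): 1020, (2, 92): 1000}
factors = normalization_factors(counts)

# Multiplying each LOR's count by its factor equalizes all responses:
corrected = {lor: counts[lor] * factors[lor] for lor in counts}
```

In the corrected data every LOR reports the same response, which is the stated goal of the normalization process: all system LORs equalized in their response to a true coincidence event.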
One known construction for a PET system uses ‘block’ detectors, in which multiple scintillator elements are coupled to a common light-sensing component, commonly a photomultiplier tube or PMT (e.g., a 6×6 element detector block). When a coincidence gamma ray strikes an element of the block detector, the entire block (multiple elements) is used in processing the event, because energy may be deposited in multiple scintillator elements. Consequently, the entire block is unavailable for processing further events for a period of time (i.e., dead-time).
The rotating rod normalization process is affected by the dead-time of the detectors. The dead-time of a detector is defined as the time taken by the detector to process a gamma ray striking the detector. Due to the dead-time, the system may not process all the gamma rays striking the detector. During the normalization scan, the fraction of time during which a detector is busy processing events is known as “block busy”, and includes factors from block and associated electronics losses. In some scanners, the block busy fraction is measured directly by the scanner and reported for each scan. In these scanners, the block busy fraction may essentially be inferred from the detector's output count rate. The higher the block busy reported for a detector, the higher the likelihood of a gamma ray being missed by the detector. The block busy of a detector depends on the distribution of radioactivity sources within and beyond the scanning field-of-view of that scanner. In the rotating rod normalization scan, block busy is highest when the source is closest to and centered over the detector. The block busy fraction for each detector in the system may be reported only once, or only a few times, during the course of a scan. In situations where the activity distribution is not stationary, such as the normalization scan with a rotating source, the reported block busy data represents the time-averaged block busy fraction through the reporting interval.
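The relationship between a reported block busy fraction and the counts lost to dead-time can be sketched with a simple live-time scaling. This is an assumed illustrative model (a basic non-paralyzable correction), not the correction claimed by the invention; the function name and rates are invented.

```python
# Illustrative live-time correction from a reported block-busy fraction:
# if a block is busy a fraction f of the reporting interval, its observed
# count rate understates the true incident rate by roughly (1 - f).
# This simple non-paralyzable model and all values are assumptions.

def deadtime_corrected_rate(observed_rate, block_busy_fraction):
    """Scale an observed count rate up by the fraction of time the
    block was live during the reporting interval."""
    live_fraction = 1.0 - block_busy_fraction
    if live_fraction <= 0.0:
        raise ValueError("block reported busy for the entire interval")
    return observed_rate / live_fraction

# A block that was busy 20% of the time and recorded 8000 counts/s was
# actually struck at roughly 10000 counts/s:
print(deadtime_corrected_rate(8000.0, 0.20))  # 10000.0
```

Because the reported block busy is a time average over the reporting interval, such a correction is only approximate when the source is moving, which is precisely the situation during a rotating rod normalization scan.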
In a known PET scanner, the strength of the rod source was chosen such that losses in the gamma ray count due to the dead-time and, therefore, the block busy fraction, were small. In this scanner, the effects of dead-time in the normalization could be ignored. However, in other PET systems, rod sources with higher activity and less favorable source-detector geometry can be employed in order to decrease the time needed to acquire the normalization scan data. Furthermore, the length of the collimators between the source and the detectors may be reduced or totally eliminated. With increased rod source strength and the reduced length (or elimination) of collimators, losses due to dead-time during normalization cannot be ignored without causing artifacts or inaccuracies in the resultant normalization correction.