PET generates images that represent the distribution of positron-emitting nuclides within the body of a patient. When a positron annihilates with an electron, the entire mass of the positron-electron pair is converted into two 511-keV photons, which are emitted in opposite directions along a line of response (LOR). The annihilation photons are detected by detectors placed on both sides of the LOR, for example in a detector-ring configuration. A coincidence occurs when the two photons are detected at detector elements within a narrow timing window, i.e., effectively at the same time. An image is then generated from the acquired data, which includes the annihilation photon detection information.
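As an illustration of the coincidence detection described above, the following sketch pairs single-photon detection events into coincidences using a timing window. The window width, event format, and pairing logic are all hypothetical simplifications, not taken from the source; a real scanner performs this in hardware or firmware with additional energy and geometry checks.

```python
COINCIDENCE_WINDOW_NS = 4.0  # hypothetical coincidence timing window

def find_coincidences(events):
    """events: list of (timestamp_ns, detector_id), sorted by timestamp.
    Returns pairs of events detected within the timing window at
    different detector elements; each pair defines one LOR."""
    pairs = []
    i = 0
    while i < len(events) - 1:
        t1, d1 = events[i]
        t2, d2 = events[i + 1]
        if t2 - t1 <= COINCIDENCE_WINDOW_NS and d1 != d2:
            pairs.append(((t1, d1), (t2, d2)))  # coincidence: LOR between d1 and d2
            i += 2  # both events consumed
        else:
            i += 1  # unpaired single event; skip it
    return pairs

# Example: two annihilations plus one unpaired single at t=100 ns
events = [(0.0, 3), (2.5, 17), (100.0, 5), (250.0, 9), (251.0, 22)]
print(find_coincidences(events))
# → [((0.0, 3), (2.5, 17)), ((250.0, 9), (251.0, 22))]
```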
Modern PET scanners include a plurality of detector rings coaxially arranged into a plurality of detector blocks. In this configuration, the two detectors associated with an LOR are not necessarily within the same detector ring, but may be located within different detector rings. Such scanners increase workflow throughput, improve image quality, and enable lower doses and shorter scan times. However, adjacent detector blocks may be separated by a small gap, which in turn results in a corresponding gap between some of the image slices. If image reconstruction does not account for these gaps, positioning errors arising from them may propagate through the image reconstruction and subsequent image stitching processes. For example, modern PET scanners are typically further equipped with a computed tomography (CT) scanner to enable PET/CT imaging. Even assuming the CT scan images the body of a patient without any positioning errors, substantial mis-registration may occur when the CT image and the PET image are stitched together; that is, objects in the final PET/CT image may be incorrectly positioned. As another example, for PET-PET stitching, a single point source that is imaged at opposite ends of the field of view (FOV) in adjacent frames may be broadened into two point sources in the final stitched image.
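The positioning error described above can be made concrete with a short sketch. All dimensions here (slice pitch, slices per block, gap width) are hypothetical and chosen only for illustration: if reconstruction assumes uniformly spaced slices and ignores the inter-block gaps, every slice in a later block is assigned an axial position that is off by the accumulated gap width, which then shows up as mis-registration during stitching.

```python
SLICE_PITCH_MM = 2.0    # hypothetical axial spacing between slices
SLICES_PER_BLOCK = 10   # hypothetical number of slices per detector block
BLOCK_GAP_MM = 3.0      # hypothetical gap between adjacent detector blocks

def slice_positions(num_blocks, gap_mm):
    """Axial (z) center of each image slice, with an optional
    extra offset inserted between consecutive detector blocks."""
    positions = []
    z = 0.0
    for _ in range(num_blocks):
        for _ in range(SLICES_PER_BLOCK):
            positions.append(z)
            z += SLICE_PITCH_MM
        z += gap_mm  # inter-block gap shifts all subsequent slices
    return positions

true_z = slice_positions(3, BLOCK_GAP_MM)   # actual geometry with gaps
assumed_z = slice_positions(3, 0.0)         # gap-unaware reconstruction model
error = true_z[-1] - assumed_z[-1]
print(error)  # → 6.0 mm: two ignored gaps accumulated by the last block
```

The error grows with the number of blocks between the slice and the scanner edge, which is why it is most visible when stitching frames acquired at different bed positions.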
One solution may involve eliminating the gaps between detector blocks. However, eliminating the gaps would require a substantial redesign of the imaging system itself. Another solution may involve accounting for the full detector geometry in the reconstruction process. However, current methods for image reconstruction exploit symmetries of the detector geometry, and accounting for the gaps breaks at least some of those symmetries, which substantially slows reconstruction. Methods and systems for correcting errors arising from detector gaps, without sacrificing the computational efficiencies based on symmetries of the detector geometry, are therefore desirable.