In particular in the field of three-dimensional computed tomography (CT), but also with other X-ray devices, methods are known for calculating, from projection images captured from different projection angles, a spatial density distribution of an object, for example of a patient or of an industrial object, that is, a three-dimensional image data set. For reconstructions of this kind, filtered back projection (FBP) reconstruction algorithms are frequently employed. These algorithms are efficient and robust, and are based essentially on the following computation steps.
Initially, a weighting of the projection images is conventionally performed, that is, a cosine weight and/or a redundancy weight is introduced in order to evaluate the projection data correctly according to its ultimate contribution. Subsequently, filter lines are identified in the projection images, along which a filtering is subsequently to take place. This procedure is widely known; if, for example, the projection images are captured along a circular path (circular CT), then lines running horizontally on the detector are generally selected.
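The cosine weighting for a flat-detector cone-beam geometry can be sketched as follows; this is an illustrative implementation, with the detector coordinates (u, v) and the source-to-detector distance D chosen as assumed names, not taken from the text above.

```python
import numpy as np

def cosine_weight(projection, u, v, D):
    """Weight each detector pixel of one projection image by the cosine of
    the ray's cone/fan angle, i.e. D / sqrt(D^2 + u^2 + v^2).

    projection : 2D array of shape (len(v), len(u))
    u, v       : 1D detector coordinate axes (same units as D)
    D          : source-to-detector distance (assumed flat detector)
    """
    uu, vv = np.meshgrid(u, v, indexing="xy")   # pixel coordinate grids
    w = D / np.sqrt(D**2 + uu**2 + vv**2)       # cosine of ray angle
    return projection * w

# For a circular trajectory (circular CT), the filter lines mentioned above
# are simply the horizontal detector rows, i.e. projection[row, :].
```

The weight equals 1 at the detector center and falls off toward the edges, reflecting the longer oblique ray paths.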
Once the filter lines are known, a one-dimensional filtering of the projection data takes place along the filter lines with a high-pass kernel, a ramp filter being used in most cases. The projection data filtered in this way is then finally subjected to a weighted three-dimensional back-projection into the volume, in order to obtain the three-dimensional image data set.
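The one-dimensional ramp filtering along the filter lines can be sketched via the FFT as follows; this is a minimal illustration using the ideal ramp |f|, whereas practical implementations typically add an apodization window, which is not shown here.

```python
import numpy as np

def ramp_filter_rows(projection):
    """Apply a 1D ramp (high-pass) filter along each row of a projection
    image, the rows being the filter lines (e.g. detector rows in
    circular CT). Ideal ramp |f|, no apodization window."""
    n = projection.shape[1]
    freqs = np.fft.fftfreq(n)            # frequency axis in cycles/sample
    ramp = np.abs(freqs)                 # ideal ramp filter |f|
    spectrum = np.fft.fft(projection, axis=1)
    return np.real(np.fft.ifft(spectrum * ramp, axis=1))
```

Since the ramp is zero at zero frequency, constant (DC) components of a filter line are removed entirely, which is the high-pass character referred to above.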
Such filtered back-projection algorithms are widely known; one of the best-known and most frequently used is the so-called Feldkamp algorithm, cf. the article by L. A. Feldkamp, L. C. Davis and J. W. Kress, “Practical cone-beam algorithm”, J. Opt. Soc. Am. A 1(6), 1984.
In practice, in particular in medical imaging, in non-destructive materials testing or in security scanners, the projection data is frequently not fully available, in the sense that the entire object is not mapped thereby; that is, there exist sub-areas of the object for which the projection data is missing. For example, many objects cannot be captured to their complete extent by the detector used with the X-ray device (so-called data truncation). Moreover, even captured projection image areas can be completely unusable for the reconstruction, since they lie, for example, in the shadow of a strongly attenuating metal insert and thus contain no information sensibly usable for the reconstruction (metal artifacts). For the purposes of the present invention, such present but unusable projection data is likewise understood as missing projection data of the object.
In the case of such missing projection data, the problem arises that the projection image defects are first spread along the filter line during the high-pass filtering and are then transferred into the image result by the back-projection, where they generate unwanted artifacts (truncation artifacts, metal artifacts).
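The spreading of a defect along the filter line can be illustrated with a small numerical experiment; the values below are purely illustrative, with a single corrupted sample standing in for, e.g., a metal shadow on one filter line.

```python
import numpy as np

# Illustrative sketch: one defective sample on a filter line is spread
# across the whole line by the (ideal) ramp high-pass filter, and would
# then be smeared into the volume by the back-projection.
n = 64
line = np.zeros(n)
line[32] = 1000.0                       # single defective sample (illustrative)

ramp = np.abs(np.fft.fftfreq(n))        # ideal ramp filter |f|
filtered = np.real(np.fft.ifft(np.fft.fft(line) * ramp))

# After filtering, clearly nonzero responses appear far away from the
# defect (falling off roughly as 1/distance^2 at odd offsets), so the
# error is no longer confined to the defective sample.
```

This is precisely why defective but localized projection data contaminates large parts of the reconstructed image.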
In order to reduce truncation artifacts when using the Feldkamp algorithm or similar FBP algorithms, projection data extrapolation or projection data interpolation is generally employed. This method is based on two steps: first, the areas in which projection data defects, i.e. missing projection data, are present are identified, for example metal regions or truncation edges. The missing projection data within the identified areas is then estimated, for example by data interpolation or smooth extrapolation, possibly after a preceding homogenization of the projection images; cf. for example J. Müller, T. Buzug, “Intersection Line Length Normalization in CT Projection Data”, in: Bildverarbeitung für die Medizin, Springer-Verlag, pages 77-81, 2008.
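The second step, estimating the missing data within identified defect areas, can be sketched for a single filter line as follows; the linear interpolation used here is one simple choice among the interpolation/extrapolation schemes mentioned above, and the function and parameter names are illustrative.

```python
import numpy as np

def interpolate_defects(line, defect_mask):
    """Replace defective samples on one filter line by linear
    interpolation from the surrounding intact samples.

    line        : 1D array of projection values along a filter line
    defect_mask : boolean array, True where the data is missing/unusable
                  (e.g. metal region or truncated edge), as identified
                  in the first step of the pre-processing.
    """
    x = np.arange(line.size)
    good = ~defect_mask
    repaired = line.copy()
    # np.interp also extrapolates flatly beyond the outermost intact
    # samples, giving a crude stand-in for truncation handling.
    repaired[defect_mask] = np.interp(x[defect_mask], x[good], line[good])
    return repaired
```

After this repair, the conventional FBP pipeline (weighting, ramp filtering, back-projection) is applied unchanged, which is why the approach counts as pure pre-processing.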
These steps can essentially be regarded as a pre-processing of the projection images, since the conventional Feldkamp algorithm is subsequently applied unchanged. Although this leads to an artifact reduction in the reconstructed images, from a practical perspective it entails disadvantages, in particular in relation to maintenance, implementation and computational efficiency, since an additional pre-processing step is ultimately required.