Field of the Invention
The invention concerns a method for reconstructing image data and textural features of an examination region to be mapped. The invention also concerns a method for segmentation of an examination region to be mapped. The invention also concerns an image reconstruction computer and an image segmentation computer, and a computed tomography system for implementing such methods.
Description of the Prior Art
Segmentation of organs in specified images from medical imaging apparatuses is a crucial step in many clinical applications. For example, segmentation of the liver is a necessary step for determining its volume, or partial volumes thereof. Knowledge of this information can then be used to plan, for example, exact operation steps during a liver operation.
Another example relates to determining contours of organs having a high sensitivity to radiation in order to plan radiotherapy treatment. In this context it is important to identify sensitive, healthy anatomical structures, such as the liver or the bladder, in the body of a patient in order to safeguard these healthy structures against damage due to the exposure to radiation that occurs during radiotherapy. The segmented healthy structures and the segmented tumors to be irradiated are then incorporated in a radiotherapy plan, so that an optimum result with respect to the health risk and usefulness of the radiotherapy is attained for the patient.
It is also desirable to develop an automated segmentation method that allows fast and exact processing of extensive quantities of data. Previously, some applications have been automated using modern image processing methods. For example, automated liver or heart segmentation is a component of many clinical applications. However, the existing solutions still have drawbacks and, in order to achieve a correct result, still require an intervention by the user in some of the segmentation processes. There is also a large number of applications in which the segmentation process is performed completely manually by contouring anatomical objects in two dimensions using simple geometric tools, and then combining them to form three-dimensional structures (see, for example, FIG. 1).
A problem in such known procedures is that the boundaries between different anatomical objects cannot always be clearly identified, and current algorithms are not capable of precisely segmenting objects of this kind. For example, in the case of non-contrasted CT image data, voxels that belong to the liver have the same CT values (Hounsfield values) as the voxels that are associated with adjacent muscle tissue.
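The difficulty can be made concrete with a minimal sketch. The Hounsfield ranges below are typical textbook values for non-contrasted CT (not figures from this application): because the liver and skeletal-muscle ranges overlap, any intensity threshold placed inside the shared interval necessarily misclassifies voxels of one tissue or the other.

```python
# Illustrative HU ranges (typical textbook values, assumed for illustration).
LIVER_HU = (40, 60)   # non-contrast liver parenchyma
MUSCLE_HU = (35, 55)  # adjacent skeletal muscle

def overlap(a, b):
    """Return the overlapping (low, high) HU interval of two ranges, or None."""
    low, high = max(a[0], b[0]), min(a[1], b[1])
    return (low, high) if low <= high else None

# Any threshold chosen inside this interval cannot separate the two tissues.
print(overlap(LIVER_HU, MUSCLE_HU))  # → (40, 55)
```

This is why purely intensity-based segmentation fails at such boundaries and additional prior knowledge is required.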
To be able to carry out segmentation even on the basis of image data in which boundaries between different anatomical structures cannot be clearly identified in the image data space, information known in advance about the image data has conventionally been incorporated in the applied segmentation algorithms. One such approach uses machine learning, in which a statistical computer model that includes geometric features and textural features is generated on the basis of a large amount of image data. A model of this kind is then applied to a patient's anatomy, with individual items of patient information being taken into account in the recorded image data. An approach of this kind enables image sections in which visual differentiation is not possible to be dealt with better: the model geometry is used in these image sections in order to compensate for deficiencies in image contrast. However, not all segmentation problems have been solved with statistical models of this kind, because information missing from the image data cannot be exactly compensated for by statistical information.
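The principle of falling back on model geometry where image contrast is insufficient can be sketched as follows. This is a hypothetical illustration, not the method of any particular prior-art system: the function names, the per-point contrast weighting, and the linear blend are all assumptions made for clarity.

```python
import numpy as np

def fuse_boundary(image_boundary, model_boundary, contrast):
    """Blend per-point boundary coordinates by local image contrast.

    Where contrast is high (near 1), trust the image-derived boundary;
    where it is low (near 0), fall back on the statistical model's
    mean-shape prediction. Both boundaries are (N, 2) point arrays.
    """
    w = np.clip(contrast, 0.0, 1.0)[:, None]
    return w * image_boundary + (1.0 - w) * model_boundary

# Two boundary points: the first is well contrasted, the second is not.
img_pts = np.array([[10.0, 0.0], [11.0, 1.0]])
model_pts = np.array([[12.0, 0.0], [12.0, 2.0]])
contrast = np.array([1.0, 0.0])

print(fuse_boundary(img_pts, model_pts, contrast))
# → [[10.  0.]
#    [12.  2.]]
```

The sketch also makes the limitation visible: at the second point the result is purely the statistical mean shape, so any patient-specific deviation absent from the image data is irrecoverably replaced by population statistics.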
During image recording with medical imaging systems, raw data, also called scan projection data, are acquired in a first step. In the case of computed tomography, for example, these data correspond to the absorption of X-rays as a function of different projection angles. Image data are then reconstructed from the raw data using integration methods. In conventional segmentation methods, all model approaches are limited to the voxel information of the image data in the image data space. A significant portion of the information, however, is lost during the transformation from raw data to reconstructed image data, and this cannot be recovered using the image data alone.
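The raw-data/image-data relationship described above can be illustrated with a deliberately minimal sketch. This is not the reconstruction method of any actual CT system: it uses only two parallel-beam projections (0° and 90°) of a 2×2 object and a simple unfiltered backprojection, purely to show that the reconstructed image is an approximation in which information present in the projection data is no longer faithfully represented.

```python
import numpy as np

def project(img, angle_deg):
    """Parallel-beam projection at 0 or 90 degrees: line integrals (ray sums)."""
    return img.sum(axis=0) if angle_deg == 0 else img.sum(axis=1)

def backproject(projections, shape):
    """Unfiltered backprojection: smear each projection back and average."""
    recon = np.zeros(shape)
    recon += np.tile(projections[0], (shape[0], 1))           # smear 0° rows
    recon += np.tile(projections[1][:, None], (1, shape[1]))  # smear 90° cols
    return recon / 2.0

obj = np.array([[0.0, 1.0],
                [1.0, 0.0]])
sino = [project(obj, 0), project(obj, 90)]  # both projections are [1., 1.]

print(backproject(sino, obj.shape))
# → [[1. 1.]
#    [1. 1.]]  -- a uniform blur, not the original object
```

Here the diagonal structure of the object is entirely lost in the reconstruction, even though it influenced the acquired ray sums; segmentation approaches that operate only on the reconstructed voxels can never exploit such lost raw-data information.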