In order to examine patients, in particular to prepare surgical treatments, operations or radiotherapy treatments, particular areas of interest of the patient are often imaged using known methods such as computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound. These imaging methods provide a patient-specific data set, for example tomographic images of an area of an organ, such as the liver, represented by various grey-scale value distributions.
In order to examine the patient or to prepare a treatment or an operation, it is often important to determine which object or anatomical structure corresponds to a particular grey-scale value distribution in an image measured in this way. For example, it can be important to localize the outlines of a particular area of the brain, or the surfaces of an object such as a tumor or a bone, in an image.
U.S. patent application Ser. No. 10/430,906 discloses a method for automatically localizing at least one structure in a data set obtained by measurement, said method comprising: predetermining a reference data set; determining a mapping function; mapping the reference data set onto the measured data set; and transforming a reference label data set, which is assigned to the reference data set, into an individualized label data set using the determined mapping function.
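The label-transformation step of such an approach can be illustrated by a minimal sketch. For illustration only, the determined mapping function is assumed here to be an affine transform; the function name, the toy label image and the chosen translation are hypothetical and not taken from the cited application:

```python
import numpy as np
from scipy import ndimage

def individualize_labels(reference_labels, matrix, offset, output_shape):
    """Transform a reference (atlas) label volume into patient space using
    a previously determined affine mapping. order=0 (nearest neighbour)
    keeps the labels integer-valued instead of interpolating them."""
    return ndimage.affine_transform(
        reference_labels, matrix, offset=offset,
        output_shape=output_shape, order=0)

# Toy 2-D example: label 1 marks a structure in the reference data set.
ref = np.zeros((8, 8), dtype=np.int32)
ref[2:5, 2:5] = 1

# affine_transform maps each output coordinate o to the input coordinate
# matrix @ o + offset, so an offset of -1 shifts the structure by +1 voxel.
out = individualize_labels(ref, np.eye(2), (-1, -1), (8, 8))
```

The same wrapper could be applied to a non-rigid mapping by substituting a displacement-field resampling for the affine call; only the nearest-neighbour interpolation of the labels is essential.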
U.S. Pat. No. 5,633,951 proposes mapping two images obtained from different imaging methods, such as magnetic resonance imaging and computed tomography, onto each other. For aligning these images, a first surface is obtained from the first image using individual scanning points which define a particular feature of an object, and this surface is superimposed onto a corresponding surface of the second image. This method, however, is very costly and requires the surfaces to be determined before the images can be aligned.
U.S. Pat. No. 5,568,384 describes a method for combining three-dimensional image sets into a single, composite image, wherein the individual images are combined on the basis of corresponding features defined in the individual images. In particular, surfaces are selected from the images and used to find common, matching features.
A method for registering an image comprising a highly deformed target image is known from U.S. Pat. No. 6,226,418 B1. In this method, individual characteristic points are defined in one image and the corresponding points are identified in the target image in order to calculate a transformation from them, by means of which the individual images can be superimposed. This method cannot be carried out automatically and is, due to its interactive nature, very time-consuming.
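The core computation of such a point-based scheme, estimating a transformation from pairs of corresponding points, can be sketched as a least-squares affine fit. The function name and the toy point sets below are hypothetical; the cited patent does not prescribe this particular solver:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.
    src, dst: (N, d) arrays of corresponding characteristic points.
    Returns (M, t) such that dst ~= src @ M.T + t."""
    n, d = src.shape
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    M, t = params[:d].T, params[d]               # linear part, translation
    return M, t

# Toy correspondences: four points related by a known scaling and shift.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src * 2.0 + np.array([3.0, -1.0])
M, t = fit_affine(src, dst)
```

With at least d + 1 non-degenerate correspondences in d dimensions the fit is exact; with more, the residual error gives a rough measure of how well an affine model explains the deformation.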
U.S. Pat. No. 6,021,213 describes a method for image processing in which an intensity limit value is selected for particular parts of the image in order to identify an anatomical area. A number of enlargement or expansion processes are applied to the area using this limit value, until the identified area satisfies particular logical restrictions characteristic of the bone marrow. This method is relatively costly and has to be performed separately for each individual anatomical area of interest.
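The repeated expansion under an intensity limit value amounts to region growing by iterated dilation. The following is a minimal sketch of that general technique, with a hypothetical function name, toy image and threshold band; the convergence test stands in for the logical restrictions of the cited method:

```python
import numpy as np
from scipy import ndimage

def grow_region(image, seed_mask, lower, upper, max_iter=50):
    """Grow a region from seed voxels by repeated binary dilation,
    keeping only voxels whose intensity lies inside [lower, upper]."""
    band = (image >= lower) & (image <= upper)   # admissible voxels
    region = seed_mask & band
    for _ in range(max_iter):
        grown = ndimage.binary_dilation(region) & band
        if np.array_equal(grown, region):        # no further growth: stop
            break
        region = grown
    return region

# Toy image: a bright rim (90) around a mid-intensity structure (50-58).
img = np.array([[10, 10, 10, 90],
                [10, 50, 55, 90],
                [10, 52, 58, 90],
                [90, 90, 90, 90]])
seed = np.zeros_like(img, dtype=bool)
seed[1, 1] = True
mask = grow_region(img, seed, 40, 70)
```

The limit values confine the growth, which is why such a method has to be re-parameterized and re-run for each anatomical area of interest.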
U.S. Pat. No. 7,117,026 discloses a method for the non-rigid registration and fusion of images with physiologically modelled organ motions, in which respiratory and cardiac motion are mathematically modelled with physiological constraints. The method of combining images comprises the steps of obtaining a first image data set of a region of interest of a subject and obtaining a second image data set of the same region of interest. Next, a general model of physiological motion for the region of interest is provided. This general model is adapted with data derived from the first image data set to provide a subject-specific physiological model, which is then applied to the second image data set to provide a combined image.
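The adapt-then-apply structure of such a model-based scheme can be sketched in one dimension. Here, purely for illustration, the general model is reduced to a generic displacement field that is scaled by a subject-specific amplitude; the function names, the integer-valued toy displacement and the data are hypothetical and much simpler than the physiological models of the cited patent:

```python
import numpy as np

def adapt_motion_model(generic_displacement, subject_amplitude):
    """Hypothetical adaptation step: scale a generic motion displacement
    field by an amplitude estimated from the subject's first data set."""
    return generic_displacement * subject_amplitude

def warp_1d(signal, displacement):
    """Apply an integer-valued (toy) displacement field to a 1-D signal
    by resampling each sample from its displaced source position."""
    idx = np.clip(np.arange(signal.size) - displacement.astype(int),
                  0, signal.size - 1)
    return signal[idx]

generic = np.ones(5)                    # generic unit displacement field
subject_model = adapt_motion_model(generic, 2.0)   # amplitude from data set 1
signal = np.array([0., 1., 2., 3., 4.])            # stands in for data set 2
warped = warp_1d(signal, subject_model)
```

A realistic implementation would use a spatially varying, sub-voxel displacement field with interpolation, but the division of labour, fitting the model once and then applying it to further data sets, is the same.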
In order to exactly localize particular structures, for example in magnetic resonance images, particular objects or anatomical structures of interest often have to be manually identified and localized by an expert. This is typically accomplished by individually examining the images taken and highlighting the structures based on the knowledge of the specialist, for example using a plotting program or particular markings. This is a very time-consuming, labor-intensive and painstaking task which depends largely on the experience of the expert. Especially when a series of similar images or data sets is taken of a specific region, for example during a breathing cycle, the object has to be identified manually in each data set, which makes the procedure all the more time-consuming.