The present invention relates to synthesizing medical image data, and more particularly, to synthesizing medical image data across image domains or image modalities.
In many practical medical image analysis problems, a situation is often encountered in which medical image data available for training, for example for machine learning based anatomical object detection, has a different distribution or representation than the medical image data given during testing. The difference is typically due to modality heterogeneity or domain variation. For example, a magnetic resonance (MR) image is different from a computed tomography (CT) image for the same patient, MR images are different across different protocols, contrast CT images are different from non-contrast CT images, and CT images captured with low kV are different from CT images captured with high kV.
The discrepancy between training and testing data is an important factor leading to poor performance of many medical image analysis algorithms, such as anatomical object detection and segmentation algorithms. Accordingly, a mechanism for intelligently adapting medical image analysis systems to new modalities or domains of medical image data, without having to spend the effort to collect a large number of new data samples, is desirable.