Abstract

Domain adaptation has become an important topic because neural networks trained on a source domain generally perform poorly on a target domain due to domain shift, especially for cross-modality medical images. In this work, we present a new unsupervised domain adaptation approach, the Multi-Stage GAN (MSGAN), to tackle domain shift in CT and MRI segmentation tasks. We adopt a parallel multi-stage strategy that avoids information loss and transfers coarse styles on low-resolution feature maps into detailed textures on high-resolution feature maps. Specifically, style layers map style codes learnt from Gaussian noise onto the input features in order to synthesize images with different styles. We validate the proposed method on cross-modality medical image segmentation tasks using two public datasets, and the results demonstrate its effectiveness. Clinical relevance: this technique paves the way for translating cross-modality images (MRI and CT) and can mitigate the performance degradation of deep neural networks applied in cross-domain scenarios.
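
The abstract's style layers are described only at a high level, so the following is a minimal sketch (in PyTorch) of one plausible reading: an MLP maps Gaussian noise to a latent style code, which then produces per-channel scale and shift parameters that modulate normalized input feature maps, in the spirit of adaptive instance normalization. All class names, dimensions, and design choices here are illustrative assumptions, not the authors' exact MSGAN architecture.

    import torch
    import torch.nn as nn

    class StyleLayer(nn.Module):
        """Illustrative style layer: Gaussian noise -> style code -> feature modulation."""

        def __init__(self, feat_channels: int, noise_dim: int = 64, style_dim: int = 128):
            super().__init__()
            # Mapping network: Gaussian noise vector -> latent style code (assumed design).
            self.mapping = nn.Sequential(
                nn.Linear(noise_dim, style_dim),
                nn.ReLU(inplace=True),
                nn.Linear(style_dim, style_dim),
            )
            # Affine heads: style code -> per-channel scale (gamma) and shift (beta).
            self.to_gamma = nn.Linear(style_dim, feat_channels)
            self.to_beta = nn.Linear(style_dim, feat_channels)
            # Normalize content features before injecting the new style.
            self.norm = nn.InstanceNorm2d(feat_channels, affine=False)

        def forward(self, feats: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
            style = self.mapping(noise)                                 # (B, style_dim)
            gamma = self.to_gamma(style).unsqueeze(-1).unsqueeze(-1)    # (B, C, 1, 1)
            beta = self.to_beta(style).unsqueeze(-1).unsqueeze(-1)      # (B, C, 1, 1)
            # Style-modulated features: scale and shift the normalized content.
            return (1 + gamma) * self.norm(feats) + beta

    if __name__ == "__main__":
        # Usage: restyle a low-resolution feature map with a randomly drawn style code.
        layer = StyleLayer(feat_channels=256)
        feats = torch.randn(2, 256, 32, 32)   # e.g. encoder features at a coarse stage
        noise = torch.randn(2, 64)            # Gaussian noise driving the style
        out = layer(feats, noise)
        print(out.shape)                      # torch.Size([2, 256, 32, 32])

In a multi-stage setup, one such layer per resolution stage would let coarse stages set the overall style and finer stages refine texture, consistent with the low-to-high resolution transfer the abstract describes.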
