Abstract

Deep convolutional networks have demonstrated state-of-the-art performance on various challenging medical image processing tasks. Leveraging images from different modalities for the same analysis task holds significant clinical benefits. However, the generalization capability of deep networks on test data sampled from a different distribution remains a major challenge. In this paper, we propose a plug-and-play adversarial domain adaptation network (PnP-AdaNet) for adapting segmentation networks between different modalities of medical images, e.g., MRI and CT. We tackle the significant domain shift by aligning the feature spaces of the source and target domains at multiple scales in an unsupervised manner. With the adversarial loss, we learn a domain adaptation module which flexibly replaces the early encoder layers of the source network, while the higher layers are shared between the two domains. We validate our domain adaptation method on cardiac segmentation in unpaired MRI and CT, covering four different anatomical structures. The average Dice score reaches 63.9%, a significant recovery from the complete failure (Dice score of 13.2%) observed when an MRI segmentation network is directly tested on CT data. In addition, our proposed PnP-AdaNet outperforms many state-of-the-art unsupervised domain adaptation approaches on the same dataset. The experimental results with comprehensive ablation studies demonstrate the efficacy of our proposed method for unsupervised cross-modality domain adaptation. Our code is publicly available at https://github.com/carrenD/Medical-Cross-Modality-Domain-Adaptation
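The plug-and-play structure described above can be pictured as a small replaceable encoder in front of a frozen shared network. Below is a minimal PyTorch sketch of that idea; the module names, layer sizes, and channel counts are illustrative assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EarlyEncoder(nn.Module):
    """Early, domain-specific layers -- the replaceable 'plug'."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class SharedHigherLayers(nn.Module):
    """Higher layers shared between domains; frozen during adaptation."""
    def __init__(self, feat_ch=32, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, n_classes, 1),  # per-pixel class logits
        )

    def forward(self, f):
        return self.net(f)

mri_encoder = EarlyEncoder()      # early layers trained on MRI (source)
ct_encoder = EarlyEncoder()       # new adaptation module for CT (target)
shared = SharedHigherLayers()     # higher layers trained on MRI, reused as-is

# During adaptation, only the CT adaptation module is trainable.
for p in shared.parameters():
    p.requires_grad = False

ct_batch = torch.randn(4, 1, 256, 256)   # dummy unlabeled CT slices
logits = shared(ct_encoder(ct_batch))    # plug-and-play inference path
print(logits.shape)                      # torch.Size([4, 5, 256, 256])
```

At test time on CT, the source encoder is simply swapped out for the adapted one, while everything downstream is reused unchanged; this is what makes the adaptation module "plug-and-play".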

Highlights

  • Deep learning models, especially convolutional neural networks (CNNs), have achieved remarkable successes in recent years, attaining state-of-the-art or even human-level performance on a variety of challenging medical imaging problems [1]–[3]

  • Prior works use cross-modality image translation to improve cardiac segmentation with synthetic data; they did not target our topic of unsupervised domain adaptation of CNNs, which is in principle much more difficult since annotations in the target domain are completely unavailable

  • We present a flexible plug-and-play adversarial domain adaptation network, called PnP-AdaNet, which effectively aligns the feature space of the target domain to that of the source domain


Summary

INTRODUCTION

Deep learning models, especially convolutional neural networks (CNNs), have achieved remarkable successes in recent years, attaining state-of-the-art or even human-level performance on a variety of challenging medical imaging problems [1]–[3]. In terms of GAN-based domain adaptation, another stream of solutions aligns the input spaces of networks instead. These methods make use of unsupervised image-to-image translation, i.e., training the network with target-like synthetic source data, or testing with source-like target data [12], [25], [26]. Ren et al. [34] utilized adversarial learning to align the feature distribution of target images to the source domain for classifying histology images obtained with different staining procedures. These works have demonstrated that imposing alignment in feature space helps to generalize deep models to new data from a different domain. Prior works on cross-modality image translation improved cardiac segmentation with synthetic data, but they did not aim at our topic of unsupervised domain adaptation of CNNs, which is in principle much more difficult since annotations in the target domain are completely unavailable.
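As a rough illustration of such adversarial feature alignment, the following PyTorch sketch trains a small discriminator to distinguish source features from target features while the target encoder learns to fool it. The encoder and discriminator designs, learning rates, and batch contents are all assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def make_encoder():
    # Tiny stand-in for the early encoder layers (one conv block).
    return nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True))

source_encoder = make_encoder()   # pretrained on MRI (source), kept fixed
target_encoder = make_encoder()   # adapted on unlabeled CT (target)

disc = nn.Sequential(             # patch-level feature discriminator
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
opt_t = torch.optim.Adam(target_encoder.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

mri = torch.randn(4, 1, 256, 256)   # dummy source batch
ct = torch.randn(4, 1, 256, 256)    # dummy unlabeled target batch

# (1) Discriminator step: label source features 1, target features 0.
f_src = source_encoder(mri).detach()
f_tgt = target_encoder(ct).detach()
d_src, d_tgt = disc(f_src), disc(f_tgt)
loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# (2) Adversarial step: update the target encoder so its features
#     are classified as "source" by the discriminator.
d_out = disc(target_encoder(ct))
loss_adv = bce(d_out, torch.ones_like(d_out))
opt_t.zero_grad(); loss_adv.backward(); opt_t.step()
```

The key point is that no CT labels appear anywhere: the only supervision driving the target encoder is the discriminator's judgment of whether its features resemble source-domain features.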

METHODS
LOSS FUNCTIONS AND TRAINING STRATEGIES
DISCUSSIONS
FINDINGS
CONCLUSION
