Abstract

Unsupervised Domain Adaptation has greatly boosted the performance of multi-modal medical image segmentation when labels are available only in the source domain and not in the target domain. Many previous works rely on Convolutional Neural Networks for distribution or instance alignment; however, the limited receptive field of CNNs makes them focus excessively on texture and local semantic information, losing global style and semantic information and failing to capture relationships between local and global semantics. To address these issues, we propose a two-stage, multi-level framework for unsupervised domain adaptation, consisting of an image translation network and a Transformer-based domain adaptation segmentation network, which jointly align the data distributions of the source and target domains at the image, feature, and output levels through adversarial learning. Experimental results indicate that our method achieves satisfactory results and outperforms other state-of-the-art medical image segmentation methods.
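
To make the output-level part of the alignment concrete, the sketch below shows one common way such adversarial alignment is implemented: a small fully-convolutional discriminator is trained to distinguish source from target prediction maps, while the segmentation network is trained to fool it. This is an illustrative sketch only, not the authors' code; it assumes PyTorch, and the names `OutputDiscriminator`, `adaptation_step`, and `segmenter` are hypothetical. The paper's full framework also includes the image-translation stage and image- and feature-level alignment, which are not shown here.

```python
# Illustrative sketch (assumed PyTorch, not the authors' implementation):
# output-level adversarial alignment between source and target prediction maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OutputDiscriminator(nn.Module):
    """Small fully-convolutional discriminator over C-channel softmax prediction maps."""

    def __init__(self, num_classes: int, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=2, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x):
        return self.net(x)


def adaptation_step(segmenter, disc, opt_seg, opt_disc,
                    src_img, src_label, tgt_img, lambda_adv=0.001):
    """One training step: supervised loss on source + adversarial alignment on target."""
    bce = nn.BCEWithLogitsLoss()

    # --- update the segmentation network ---
    opt_seg.zero_grad()
    src_pred = segmenter(src_img)                    # (N, C, H, W) logits
    seg_loss = F.cross_entropy(src_pred, src_label)  # supervised loss on labeled source data

    tgt_pred = segmenter(tgt_img)
    d_tgt = disc(F.softmax(tgt_pred, dim=1))
    # adversarial term: make target predictions indistinguishable from source predictions
    adv_loss = bce(d_tgt, torch.ones_like(d_tgt))
    (seg_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # --- update the discriminator ---
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(F.softmax(tgt_pred.detach(), dim=1))
    disc_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    disc_loss.backward()
    opt_disc.step()

    return seg_loss.item(), adv_loss.item(), disc_loss.item()
```

In this sketch the segmentation backbone is treated as a black box, so a Transformer-based encoder-decoder such as the one described in the abstract could be plugged in as `segmenter` without changing the training loop.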
