Abstract

Domain adaptation is an important task in medical image analysis, as it improves generalization across datasets collected from different institutes with different scanners and protocols. For images with a visible domain shift, image translation models are an intuitive and effective way to perform domain adaptation, but the structure of the generated image is often distorted when large content discrepancies exist between domains, resulting in poor downstream task performance. To address this, we propose a novel image translation model that disentangles structure and texture and transfers only the latter, using mutual information and texture co-occurrence losses. We translate source-domain images to the target domain and use the generated results as augmented samples when training a segmentation model for domain adaptation. We evaluate our method on three public segmentation benchmarks acquired from diverse institutes: the MMWHS, Fundus, and Prostate datasets. Experimental results show that a segmentation model trained with the augmented images from our approach outperforms state-of-the-art domain adaptation, image translation, and domain generalization methods.
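To make the two objectives named in the abstract concrete, the sketch below shows one common way such terms are realized in practice: a texture co-occurrence term approximated by matching Gram matrices of feature maps, and a mutual information term approximated with an InfoNCE-style contrastive estimator between structure codes. This is a minimal illustrative sketch under those assumptions; the function names, network features, estimators, and loss weights are hypothetical and are not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F


def gram_matrix(feat):
    # feat: (B, C, H, W) feature map; the Gram matrix captures channel
    # co-occurrence statistics, a common proxy for texture.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def texture_cooccurrence_loss(fake_feat, target_feat):
    # Match second-order feature statistics between the translated image
    # and a target-domain image (assumed texture co-occurrence surrogate).
    return F.mse_loss(gram_matrix(fake_feat), gram_matrix(target_feat))


def structure_mi_loss(src_struct, fake_struct, temperature=0.07):
    # InfoNCE-style lower bound on mutual information between the structure
    # code of a source image and that of its translation (hypothetical
    # estimator; encourages structure preservation during translation).
    src = F.normalize(src_struct.flatten(1), dim=1)
    fake = F.normalize(fake_struct.flatten(1), dim=1)
    logits = src @ fake.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)


def translation_loss(src_struct, fake_struct, fake_feat, tgt_feat,
                     lambda_mi=1.0, lambda_tex=1.0):
    # Illustrative combined objective; the weights are placeholders.
    return (lambda_mi * structure_mi_loss(src_struct, fake_struct)
            + lambda_tex * texture_cooccurrence_loss(fake_feat, tgt_feat))
```

The translated images produced by a generator trained with such an objective would then be added to the segmentation training set as target-style augmented samples, which is the role the abstract describes for the generated results.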
