Abstract
Domain adaptation is an important task in medical image analysis for improving generalization across datasets collected at different institutes with different scanners and protocols. For images with a visible domain shift, image translation models are an intuitive and effective way to perform domain adaptation, but the structure of the generated image is often distorted when large content discrepancies exist between domains, resulting in poor downstream task performance. To address this, we propose a novel image translation model that disentangles structure and texture and transfers only the latter, using mutual information and texture co-occurrence losses. We translate source-domain images to the target domain and employ the generated results as augmented samples for training a domain adaptation segmentation model. We evaluate our method on three public segmentation benchmarks, the MMWHS, Fundus, and Prostate datasets, acquired from diverse institutes. Experimental results show that a segmentation model trained with the augmented images from our approach outperforms state-of-the-art domain adaptation, image translation, and domain generalization methods.
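A minimal sketch of the augmentation strategy described above, assuming a PyTorch setup with a pretrained structure-preserving translator (`translator`) and a segmentation network (`seg_net`); these names and the training-step structure are illustrative assumptions, not the authors' released code. It shows how translated source images can be paired with the original source labels, which is valid only because the translation is assumed to keep structure intact while changing texture.

```python
# Hypothetical sketch: augmenting segmentation training with translated images.
import torch
import torch.nn as nn

def augmented_training_step(seg_net, translator, optimizer, src_images, src_labels):
    """One training step mixing original source images with their
    target-styled translations, both paired with the same source labels."""
    seg_net.train()
    with torch.no_grad():
        # Transfer target-domain texture onto source images (structure assumed preserved).
        translated = translator(src_images)

    # Treat translated images as extra samples under the original annotations.
    images = torch.cat([src_images, translated], dim=0)
    labels = torch.cat([src_labels, src_labels], dim=0)

    logits = seg_net(images)                     # (2B, C, H, W) class scores
    loss = nn.functional.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```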