Abstract

Data annotation is expensive and time-consuming for deep-learning-based medical image analysis. To reduce the need for annotations, domain adaptation has been introduced to generalize neural networks from a labeled source domain to an unlabeled target domain without severe performance degradation. In this paper, we propose a novel target-domain self-supervision scheme for domain adaptation: an auxiliary edge-generation task is constructed to assist the primary segmentation task, so as to extract better target representations and improve target segmentation performance. In addition, to leverage the detailed information contained in low-level features, we propose a hierarchical low-level adversarial learning mechanism that encourages low-level features to be domain-uninformative at multiple levels, so that segmentation can benefit from low-level features without being affected by domain shift. Combining these two approaches, we develop a cross-modality domain adaptation framework that employs dual-task collaboration for target-domain self-supervision and encourages low-level detailed features to be domain-uninformative for better alignment. Our framework achieves state-of-the-art results on public cross-modality segmentation datasets.
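The two components described above can be pictured with a short sketch. The PyTorch code below is a minimal illustration only, not the authors' implementation: the layer sizes, the loss weight, and the placeholder edge targets (`e_src`, `e_tgt_pseudo`) are all assumptions, and a single discriminator on one low-level feature map stands in for the paper's hierarchical scheme over several levels.

```python
# Sketch of (1) a shared encoder with a primary segmentation head and an
# auxiliary edge-generation head used for target self-supervision, and
# (2) adversarial alignment of low-level features via a discriminator.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class DualTaskNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.low = conv_block(1, 32)   # low-level features (aligned adversarially)
        self.high = nn.Sequential(conv_block(32, 64), conv_block(64, 64))
        self.seg_head = nn.Conv2d(64, n_classes, 1)  # primary segmentation task
        self.edge_head = nn.Conv2d(64, 1, 1)         # auxiliary edge-generation task

    def forward(self, x):
        f_low = self.low(x)
        f_high = self.high(f_low)
        return self.seg_head(f_high), self.edge_head(f_high), f_low

class Discriminator(nn.Module):
    """Scores low-level features as source (1) vs. target (0)."""
    def __init__(self, c_in=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, f):
        return self.net(f)

model, disc = DualTaskNet(), Discriminator()
seg_loss = nn.CrossEntropyLoss()
edge_loss = adv_loss = nn.BCEWithLogitsLoss()

x_src = torch.randn(2, 1, 128, 128)          # labeled source batch
y_src = torch.randint(0, 5, (2, 128, 128))   # source segmentation labels
e_src = torch.rand(2, 1, 128, 128)           # source edge maps (assumed precomputed)
x_tgt = torch.randn(2, 1, 128, 128)          # unlabeled target batch
e_tgt_pseudo = torch.rand(2, 1, 128, 128)    # target edge targets: the self-supervised
                                             # signal (placeholder values here)

# --- segmentation network update ---
seg_s, edge_s, f_s = model(x_src)
seg_t, edge_t, f_t = model(x_tgt)
loss = seg_loss(seg_s, y_src) + edge_loss(edge_s, e_src)
loss = loss + edge_loss(edge_t, e_tgt_pseudo)            # target self-supervision via edges
d_t = disc(f_t)
loss = loss + 0.01 * adv_loss(d_t, torch.ones_like(d_t)) # fool D so target low-level
loss.backward()                                          # features resemble source ones

# --- discriminator update (features detached so only D learns) ---
d_s, d_t = disc(f_s.detach()), disc(f_t.detach())
d_loss = adv_loss(d_s, torch.ones_like(d_s)) + adv_loss(d_t, torch.zeros_like(d_t))
d_loss.backward()
```

In the hierarchical version the abstract describes, one would attach such a discriminator at several low-level stages of the encoder rather than at a single one, aligning each stage's features across domains.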
