Abstract

Deep learning-based methods have been widely used for semantic segmentation in recent years. However, owing to the difficulty and labor cost of collecting pixel-level annotations, it is hard to acquire sufficient training images for a given imaging modality, which greatly hinders the performance of these methods. The intuitive solution to this issue is to train a model on a label-rich imaging modality (source domain) and then apply the pre-trained model to the label-poor imaging modality (target domain). Unsurprisingly, because of the severe domain shift between different modalities, the pre-trained model performs poorly on the target imaging modality. To this end, we propose a novel unsupervised domain adaptation framework, called Joint Image and Feature Adaptive Attention-aware Networks (JIFAAN), to alleviate the domain shift for cross-modality semantic segmentation. The proposed framework mainly consists of two procedures. The first procedure is image adaptation, which transforms the source domain images into target-like images using adversarial learning with a cycle-consistency constraint. To further bridge the gap between the transformed images and the target domain images, the second procedure employs feature adaptation to extract domain-invariant features and thus aligns the distributions in feature space. In particular, we introduce an attention module in the feature adaptation to focus on noteworthy regions and generate attention-aware results. Lastly, we combine the two procedures in an end-to-end manner. Experiments on two cross-modality semantic segmentation datasets demonstrate the effectiveness of our proposed framework. Specifically, JIFAAN surpasses cutting-edge domain adaptation methods and achieves state-of-the-art performance.
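The cycle-consistency constraint mentioned in the image adaptation procedure can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the generator names `g_st` (source-to-target) and `g_ts` (target-to-source), the L1 form of the loss, and the toy identity generators are all assumptions for demonstration.

```python
import numpy as np

def cycle_consistency_loss(x_s, x_t, g_st, g_ts):
    """L1 cycle-consistency loss between original and reconstructed images.

    g_st maps source-domain images to a target-like style; g_ts maps back.
    Both are hypothetical stand-ins for the learned generators.
    """
    recon_s = g_ts(g_st(x_s))  # source -> target-like -> source
    recon_t = g_st(g_ts(x_t))  # target -> source-like -> target
    return np.abs(recon_s - x_s).mean() + np.abs(recon_t - x_t).mean()

# Toy check: identity generators reconstruct perfectly, so the loss is zero.
x_s = np.random.rand(1, 3, 8, 8)  # a fake source-domain image batch
x_t = np.random.rand(1, 3, 8, 8)  # a fake target-domain image batch
identity = lambda x: x
print(cycle_consistency_loss(x_s, x_t, identity, identity))  # 0.0
```

In a full framework this loss would be minimized jointly with adversarial losses so that the translated images adopt the target style while preserving the source content.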
