Abstract

Unsupervised domain adaptation (UDA) transfers knowledge from a labeled source domain to an unlabeled target domain. Existing feature alignment methods for UDA semantic segmentation pursue this goal by aligning the feature distributions between the two domains. However, these methods ignore the domain-specific knowledge of the target domain; as a consequence, 1) the correlations among target-domain pixels are not exploited, and 2) the classifier is not explicitly tailored to the target-domain distribution. To address these issues, we propose a novel cluster alignment framework that mines domain-specific knowledge while performing the alignment. Specifically, we design a multi-prototype clustering strategy that draws target-domain pixel features of the same class into tight clusters. A contrastive strategy is then developed to align the distributions across domains while preserving the clustered structure. Finally, a novel affinity-based normalized cut loss is devised to learn task-specific decision boundaries. Our method enhances the model's adaptability to the target domain and can serve as a pre-adaptation stage for self-training to further boost performance. Extensive experiments demonstrate the effectiveness of our method against existing state-of-the-art approaches on representative UDA benchmarks.
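
To make the affinity-based normalized cut idea concrete, the sketch below shows a generic soft normalized-cut objective over pixel class probabilities and a pixel affinity matrix. This is an illustrative assumption, not the authors' implementation; the function name, tensor shapes, and the specific relaxation used here are hypothetical.

```python
import torch

def normalized_cut_loss(probs: torch.Tensor, affinity: torch.Tensor) -> torch.Tensor:
    """Soft normalized-cut loss (illustrative sketch, not the paper's exact formulation).

    probs:    (N, K) softmax class probabilities for N pixels.
    affinity: (N, N) symmetric, non-negative pixel affinity matrix.
    Returns a scalar in [0, K]; minimizing it encourages pixels with
    high mutual affinity to receive the same predicted class.
    """
    degree = affinity.sum(dim=1)                                   # (N,) node degrees D
    # Within-class association: S_k^T W S_k for each class k.
    assoc = torch.einsum('nk,nm,mk->k', probs, affinity, probs)
    # Class volume: S_k^T D 1 for each class k (small epsilon avoids division by zero).
    volume = torch.einsum('nk,n->k', probs, degree) + 1e-8
    return probs.shape[1] - (assoc / volume).sum()
```

In practice such a loss would be computed on downsampled feature maps or sampled pixels, since a dense N x N affinity matrix over full-resolution predictions is prohibitively large.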
