Abstract

Unsupervised domain adaptation (UDA) algorithms aim to transfer knowledge learned from a labeled source domain to an unlabeled target domain. To this end, domain-alignment models operating at different levels of the representation space have been used to extract knowledge relevant to the ultimate task. Although these models achieve remarkable performance, they ignore domain shift, which poses a clear challenge when adapting a model trained on one domain to another. Pseudo-label learning for UDA has therefore been proposed to reduce the impact of domain shift. However, the inevitable label noise severely weakens the model's capability for feature representation. Inspired by recent studies on deep learning with noisy labels, we propose a Mutual Constraint Network for multi-level UDA (M2N) that mitigates the effects of noisy pseudo labels and learns better feature representations at different levels of the representation space. Extensive experiments on Digits, Office-31, and VisDA-2017 show that our method achieves new state-of-the-art performance on these three benchmarks and improves significantly over prior single-level UDA methods.
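A common way to obtain pseudo labels in UDA, as referenced in the abstract, is to keep only the target predictions the source-trained model is confident about. This is a minimal illustrative sketch, not the paper's actual M2N procedure; the function name `pseudo_labels` and the threshold value are assumptions introduced here for illustration.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.8):
    """Assign pseudo labels to target samples whose top class
    probability exceeds a confidence threshold; mark the rest -1.

    probs: (n_samples, n_classes) array of predicted class probabilities
    (illustrative scheme, not the paper's exact method)."""
    conf = probs.max(axis=1)           # confidence of the top prediction
    labels = probs.argmax(axis=1)      # tentative pseudo label per sample
    labels[conf < threshold] = -1      # ignore low-confidence samples
    return labels

# Toy predicted probabilities for 3 target samples over 3 classes.
p = np.array([[0.97, 0.02, 0.01],
              [0.40, 0.35, 0.25],
              [0.05, 0.05, 0.90]])
print(pseudo_labels(p))  # confident rows keep their argmax; the
                         # uncertain middle row is marked -1
```

Samples marked -1 are typically excluded from the target-domain loss, which is exactly where label noise on the retained samples becomes the problem the abstract's mutual constraint is designed to address.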
