Abstract

Domain adaptation aims to learn a predictive model that generalizes to a target domain different from the source (training) domain. To mitigate the domain gap, adversarial training has been developed to learn domain-invariant representations. State-of-the-art methods further use pseudo labels generated by the source domain classifier to match conditional feature distributions between the source and target domains. However, when the target domain is more complex than the source domain, the pseudo labels cannot reliably characterize the class-conditional structure of the target domain data, undermining prediction performance. To resolve this issue, we propose a Pairwise Similarity Regularization (PSR) approach that exploits the cluster structure of the target domain data and minimizes the divergence between the pairwise similarities induced by the clustering partition and those induced by the pseudo predictions. PSR thereby encourages two target instances in the same cluster to receive the same class prediction, eliminating the negative effect of unreliable pseudo labels. Extensive experiments show that our PSR method substantially improves current adversarial domain adaptation methods on four visual benchmarks. In particular, PSR achieves an improvement of more than 5% over the state of the art on several hard-to-transfer tasks.
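The core idea of the regularizer can be sketched as follows. The abstract does not specify the exact divergence or clustering procedure, so the binary-cross-entropy form, the inner-product prediction similarity, and the input names below are illustrative assumptions, not the paper's actual loss:

```python
import math

def psr_loss(cluster_ids, probs, eps=1e-8):
    """Illustrative sketch of a pairwise similarity regularizer.

    cluster_ids: per-instance cluster assignments from an unsupervised
                 clustering of target-domain features (hypothetical input).
    probs:       per-instance class-probability vectors (pseudo predictions).

    Returns a binary-cross-entropy-style divergence between the two
    pairwise similarity structures, averaged over all instance pairs.
    """
    n = len(cluster_ids)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # Target similarity: 1 if the pair shares a cluster, else 0.
            s_c = 1.0 if cluster_ids[i] == cluster_ids[j] else 0.0
            # Predicted similarity: inner product of probability vectors.
            s_p = sum(a * b for a, b in zip(probs[i], probs[j]))
            s_p = min(max(s_p, eps), 1.0 - eps)  # clip for numerical safety
            # Penalize disagreement between the two similarity views.
            total += -(s_c * math.log(s_p) + (1.0 - s_c) * math.log(1.0 - s_p))
            count += 1
    return total / count
```

Under this sketch, two target instances assigned to the same cluster but predicted as different classes incur a large penalty, which is the behavior the abstract attributes to PSR.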
