Abstract

Domain adaptation is critical for solving learning problems in new, unseen environments. Recent works have shown that adversarial domain adaptation achieves state-of-the-art performance. However, existing adversarial domain adaptation networks usually pursue purely domain-invariant features to achieve adaptation, which breaks the discriminative structure of the model and leads to negative transfer. To this end, we propose a novel approach named Learning a Weighted Classifier for Conditional Domain Adaptation (LWC). Specifically, we propose a novel mechanism to quantify the transferability of each sample. We observe that the domain discriminator finds it hard to distinguish similar samples, so the entropy of its output is high for them, whereas for dissimilar samples the entropy of its output is low. By leveraging the output entropy of the conditional domain discriminator, we adopt matched strategies for similar and dissimilar samples, learning a more distinct decision boundary on similar samples while alleviating the effect of dissimilar samples. Extensive experiments on open benchmarks verify that our model outperforms previous methods.
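The abstract only sketches the weighting mechanism, so the following is a minimal, hedged illustration of the general idea rather than the paper's exact method: it assumes the per-sample transferability weight is an increasing function of the binary entropy of the domain discriminator's output, so that hard-to-discriminate (similar) samples are emphasized and easily discriminated (dissimilar) samples are down-weighted. The function names and the normalization are hypothetical.

```python
import torch
import torch.nn.functional as F

def discriminator_entropy(d_out: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Binary entropy of the domain discriminator's output d_out in (0, 1).

    High entropy -> the discriminator cannot tell the domains apart
    (a similar, transferable sample); low entropy -> a dissimilar sample.
    """
    d = d_out.clamp(eps, 1.0 - eps)
    return -(d * d.log() + (1.0 - d) * (1.0 - d).log())

def transferability_weights(d_out: torch.Tensor) -> torch.Tensor:
    """Hypothetical per-sample weights derived from discriminator entropy:
    emphasize high-entropy (similar) samples, suppress low-entropy ones."""
    h = discriminator_entropy(d_out)                 # values in [0, log 2]
    w = h / torch.log(torch.tensor(2.0))             # normalize to [0, 1]
    return w.detach()                                # treat weights as constants

# Usage sketch: re-weight the source classification loss with these weights.
# logits: classifier outputs, labels: source labels, d_out: discriminator outputs.
# per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
# loss = (transferability_weights(d_out) * per_sample_loss).mean()
```

This is only one plausible instantiation of entropy-based sample weighting; the paper's conditional discriminator and its exact weighting scheme may differ in detail.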
