Abstract

With the rapid development of vision-based deep learning (DL), generating large-scale synthetic data to supplement real data has become an effective way to train DL models via domain adaptation. However, previous vanilla domain adaptation methods generally assume that the source and target domains share the same label space, an assumption that no longer holds in the more realistic scenario of adapting from a large, diverse source domain to a smaller target domain with fewer classes. To handle this problem, we propose attention-based adversarial partial domain adaptation (AADA). Specifically, we leverage adversarial domain adaptation to augment the target domain with source-domain data, which readily turns the task into vanilla domain adaptation. Meanwhile, to focus accurately on transferable features, we apply an attention-based method when training the adversarial networks, yielding better transferable semantic features. Experiments on four benchmarks demonstrate that the proposed method outperforms existing methods by a large margin, especially on tough domain adaptation tasks, e.g., VisDA-2017.
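
To make the core idea concrete, below is a minimal PyTorch sketch of attention-weighted adversarial domain adaptation in the partial setting. It is an illustration under assumptions, not the paper's exact architecture: the feature dimension, the DomainDiscriminator layout, and the attention_weights scheme (down-weighting source samples the discriminator confidently identifies as source, which likely belong to source-only classes) are all hypothetical choices standing in for the method described in the abstract.

import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal layer used in adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DomainDiscriminator(nn.Module):
    """Binary domain classifier over extracted features (illustrative sizes)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, f):
        return self.net(f)  # logit: source (1) vs. target (0)

def attention_weights(disc_logits):
    """One plausible attention scheme (an assumption, not the paper's exact form):
    weight source samples by how target-like the discriminator finds them, so
    samples from source-only classes contribute less to the adversarial loss."""
    with torch.no_grad():
        w = torch.sigmoid(-disc_logits)   # high when a sample looks target-like
        w = w / (w.mean() + 1e-8)         # normalize weights to mean 1
    return w

# Illustrative training step with random stand-in features (hypothetical shapes).
feat_s = torch.randn(32, 256)             # source features from some backbone
feat_t = torch.randn(32, 256)             # target features
disc = DomainDiscriminator()
bce = nn.BCEWithLogitsLoss(reduction="none")

logits_s = disc(grad_reverse(feat_s))
logits_t = disc(grad_reverse(feat_t))
w_s = attention_weights(disc(feat_s))     # attention over source samples

loss_adv = (w_s.squeeze(1) * bce(logits_s, torch.ones_like(logits_s)).squeeze(1)).mean() \
         + bce(logits_t, torch.zeros_like(logits_t)).mean()
loss_adv.backward()                       # reversed gradients align the two domains

In a full pipeline this adversarial term would be combined with a supervised classification loss on the labeled source data; the sketch only isolates the attention-weighted adversarial component.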
