Abstract

Multimedia applications often involve knowledge transfer across domains, e.g., from images to text, where Unsupervised Domain Adaptation (UDA) can be used to reduce domain shift. Most UDA methods are based on adversarial learning. However, previous adversarial domain adaptation methods may suffer from three issues. First, although the features learned by previous methods can fool the domain classifier into making false predictions, they may not be truly domain-invariant. Second, the limited number of training samples makes the latent feature space insufficiently smooth and continuous. Third, the target domain features may lack discriminability. In this paper, we propose a novel adversarial domain adaptation method named Adversarial Mixup Ratio Confusion (AMRC) to alleviate all of the above issues. Specifically, we propose a new adversarial training pattern that uses mixup to generate multiple features with different mixup ratios, which represent different intermediate states between the source and target domains. On one hand, we train an estimator to estimate the mixup ratio as accurately as possible; on the other hand, we train a generator to make the estimator uncertain about the mixup ratio. In this way, our method can learn a continuous and domain-invariant latent space. Furthermore, we apply intra-domain and cross-domain mixup regularizations to ensure the smoothness and continuity of the latent space while making the classifier behave more linearly on in-between samples. Finally, we exploit sharpened pseudo-labels of the target samples for self-supervised learning to enhance the discriminability of the target features. Experimental results on three benchmarks verify the effectiveness of our method.
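The core ideas of the abstract — mixup features as intermediate domain states, a ratio estimator trained against a confusing generator, and pseudo-label sharpening — can be illustrated with a minimal NumPy sketch. All names, dimensions, and the linear ratio probe below are illustrative assumptions, not the paper's actual architecture or losses:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_features(z_src, z_tgt, lam):
    """Interpolate source/target features with mixup ratio lam in [0, 1]."""
    return lam * z_src + (1.0 - lam) * z_tgt

# Toy feature batches (hypothetical 4-sample, 8-dim encoder outputs).
z_src = rng.normal(size=(4, 8))
z_tgt = rng.normal(size=(4, 8))

# Sample several mixup ratios: each mixed batch represents an
# intermediate state between the source and target domains.
lams = rng.uniform(size=3)
mixed = [mixup_features(z_src, z_tgt, lam) for lam in lams]

# Stand-in ratio estimator: a linear probe squashed into [0, 1].
w = rng.normal(size=8) * 0.01
def estimate_ratio(z, w):
    return 1.0 / (1.0 + np.exp(-(z @ w).mean()))

# Estimator objective: predict the true mixup ratio (MSE as a stand-in).
est_loss = np.mean([(estimate_ratio(m, w) - lam) ** 2
                    for m, lam in zip(mixed, lams)])

# Generator objective: make the estimator maximally uncertain, i.e.
# push its prediction toward 0.5 regardless of the true ratio.
gen_loss = np.mean([(estimate_ratio(m, w) - 0.5) ** 2 for m in mixed])

# Pseudo-label sharpening: temperature T < 1 makes the target-sample
# class distribution more confident before self-supervised training.
def sharpen(p, T=0.5):
    q = p ** (1.0 / T)
    return q / q.sum()

p = np.array([0.5, 0.3, 0.2])   # hypothetical soft pseudo-label
q = sharpen(p)                  # dominant class probability increases
```

In a real implementation both objectives would drive gradient updates in alternation (the estimator minimizes `est_loss`, the feature generator minimizes `gen_loss`), which is the adversarial game the abstract describes.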
