Abstract

Domain adversarial training is a popular approach for Unsupervised Domain Adaptation (DA). However, the transferability of the adversarial training framework may drop greatly on adaptation tasks with a large distribution divergence between the source and target domains. In this paper, we propose a new approach, termed Adversarial Mixup Synthesis Training (AMST), to alleviate this issue. AMST augments training with synthesized samples obtained by linearly interpolating between pairs of hidden representations and their domain labels. In this way, AMST encourages the model to make consistent, less confident domain predictions on interpolated points, which leads it to learn domain-specific representations with fewer directions of variance. Building on previous work, we conduct a theoretical analysis of this phenomenon under ideal conditions and show that AMST can improve generalization ability. Finally, experiments on benchmark datasets demonstrate the effectiveness and practicability of AMST. We will publicly release our code on GitHub soon.
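
The abstract gives no implementation details, but the interpolation step it describes resembles mixup applied to hidden features and their domain labels. Below is a minimal PyTorch sketch under that reading; the Beta(α, α) mixing distribution, the 1 = source / 0 = target label convention, and all names (mixup_hidden, discriminator) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixup_hidden(h_src, h_tgt, alpha=0.2):
    """Linearly interpolate hidden representations and domain labels.

    Assumed convention: source features carry domain label 1 and
    target features label 0, so the mixed batch gets the soft
    domain label lam.
    """
    # Mixing coefficient; Beta(alpha, alpha) is the usual mixup choice
    # (an assumption here, since the abstract does not specify it).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    h_mix = lam * h_src + (1.0 - lam) * h_tgt
    y_dom = torch.full((h_src.size(0),), lam)
    return h_mix, y_dom

# Toy demonstration with random features and a linear domain discriminator.
feat_dim = 64
discriminator = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

h_src = torch.randn(32, feat_dim)   # hidden features of a source batch
h_tgt = torch.randn(32, feat_dim)   # hidden features of a target batch

h_mix, y_dom = mixup_hidden(h_src, h_tgt)
# Training the discriminator toward the soft label lam on interpolated
# points is the consistency objective the abstract describes: the model
# is discouraged from making confident domain predictions there.
loss_dom = F.binary_cross_entropy(discriminator(h_mix).squeeze(1), y_dom)
print(loss_dom.item())
```

In a full training loop, this domain loss would be combined with the usual task loss and optimized adversarially against the feature extractor, as in standard domain adversarial training; those details are not specified by the abstract.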
