Abstract
Recently, adversarial learning has dominated domain adaptation, a popular branch of transfer learning. The basic idea of adversarial domain adaptation networks (ADAN) is to learn domain-invariant features that can confuse the domain discriminator. By sharing the spirit of generative adversarial networks (GANs), ADAN has achieved state-of-the-art performance. However, ADAN also inherits the drawbacks of GANs. One of the most critical issues of GANs is that the learned distribution may be far from the expected one even when training succeeds, which is known as the generalization issue of GANs. As a result, there is no guarantee that the learned representations are domain-invariant even if the domain discriminator is successfully confused. To address this, we propose a new domain adaptation approach under the ADAN framework. Specifically, we reformulate the conventional ADAN and propose ADAN plus a metric protocol under the new formulation, ADANM for short, which leverages both adversarial learning and metric learning. On one hand, the proposed method addresses the generalization issue of previous ADAN approaches; on the other hand, it guarantees that domain divergence is minimized during adversarial training. Extensive experiments on three public benchmarks verify that the proposed protocol is effective for unsupervised domain adaptation tasks.
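To illustrate the general idea described above, the sketch below shows how an adversarial domain-confusion objective can be combined with an explicit feature-distance (metric) term. This is a minimal illustration only: the network sizes, the use of a linear-MMD term as the metric, and all module and variable names are assumptions for the example, not the paper's actual formulation of ADANM.

```python
# Hypothetical sketch: adversarial domain loss plus a metric (feature-distance)
# term, in the spirit of adversarial domain adaptation with metric learning.
# Names and the choice of metric are illustrative, not the paper's method.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, f):
        return self.net(f)

def linear_mmd(fs, ft):
    # Distance between mean source and target features,
    # used here as a simple illustrative metric term.
    return (fs.mean(dim=0) - ft.mean(dim=0)).pow(2).sum()

F = FeatureExtractor()
D = DomainDiscriminator()
bce = nn.BCEWithLogitsLoss()
opt_f = torch.optim.Adam(F.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

xs = torch.randn(32, 256)  # source-domain batch (placeholder data)
xt = torch.randn(32, 256)  # target-domain batch (placeholder data)

# 1) Train the discriminator to separate source from target features.
fs, ft = F(xs).detach(), F(xt).detach()
d_loss = bce(D(fs), torch.ones(32, 1)) + bce(D(ft), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Train the feature extractor to confuse the discriminator (adversarial
#    term) while also shrinking an explicit domain-divergence metric.
fs, ft = F(xs), F(xt)
adv_loss = bce(D(ft), torch.ones(32, 1))       # target features mimic source
total = adv_loss + 0.1 * linear_mmd(fs, ft)    # adversarial + metric term
opt_f.zero_grad(); total.backward(); opt_f.step()
```

The metric term gives the extractor a direct divergence signal, so domain alignment does not rely solely on fooling the discriminator; the 0.1 weight is an arbitrary placeholder for the example.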