Abstract

Domain adaptation algorithms leverage the knowledge of a well-labeled source domain to facilitate learning in an unlabeled target domain, where the two domains are related but drawn from different data distributions. Existing domain adaptation approaches either explicitly mitigate the distribution gap by minimizing some distance metric, or learn a new feature representation by revealing the factors shared between the domains and using that representation as a bridge for knowledge transfer. Recently, several researchers have argued that jointly optimizing the distribution gap and the latent factors yields a better transfer model. In this paper, we therefore propose a novel approach that simultaneously mitigates the distribution gap and learns a feature representation under a common objective. Specifically, we present joint metric and feature representation learning (JMFL) for unsupervised domain adaptation. On the one hand, JMFL minimizes the domain discrepancy between the source domain and the target domain; on the other hand, it reveals the underlying factors shared by the two domains to learn a new feature representation. We smoothly incorporate these two aspects into a unified objective and present a detailed optimization method. Extensive experiments on several open benchmarks verify that our approach achieves state-of-the-art results with significant improvements.
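
The paper's concrete formulation is not reproduced here, but the following minimal sketch illustrates what a unified objective in the spirit of JMFL can look like, assuming (hypothetically) a linear-kernel MMD as the distance metric and an autoencoder-style branch for the shared latent factors. The names JointModel and joint_loss, and the trade-off weights lam and mu, are illustrative assumptions rather than the authors' notation.

# A sketch of a joint metric + representation objective, NOT the paper's
# exact method: MMD and the reconstruction branch are assumed stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd_linear(zs, zt):
    # Linear-kernel MMD: squared distance between batch feature means.
    delta = zs.mean(dim=0) - zt.mean(dim=0)
    return delta.dot(delta)

class JointModel(nn.Module):
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)      # shared latent factors
        self.decoder = nn.Linear(d_hidden, d_in)      # representation branch
        self.classifier = nn.Linear(d_hidden, n_classes)  # source-label branch

    def forward(self, x):
        z = torch.relu(self.encoder(x))
        return z, self.decoder(z), self.classifier(z)

def joint_loss(model, xs, ys, xt, lam=1.0, mu=0.1):
    # Unified objective: source classification + domain discrepancy (MMD)
    # + reconstruction on both domains, weighted by lam and mu.
    zs, rec_s, logits = model(xs)
    zt, rec_t, _ = model(xt)
    cls = F.cross_entropy(logits, ys)                  # labels exist only for source
    rec = F.mse_loss(rec_s, xs) + F.mse_loss(rec_t, xt)
    return cls + lam * mmd_linear(zs, zt) + mu * rec

A training step would then backpropagate through all three terms at once, which is the "joint" aspect the abstract emphasizes, e.g.:

model = JointModel(d_in=100, d_hidden=16, n_classes=2)
xs, ys = torch.randn(32, 100), torch.randint(0, 2, (32,))  # labeled source batch
xt = torch.randn(32, 100)                                  # unlabeled target batch
loss = joint_loss(model, xs, ys, xt)
loss.backward()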
