Abstract

Domain adaptation is a field of machine learning that addresses the problem that arises when a classifier is trained and tested on domains drawn from different distributions. This paradigm is of vital importance as it allows a learner to generalize knowledge across different tasks. In this paper, we present a new method for fully unsupervised domain adaptation that seeks to align two domains using a shared set of basis vectors derived from the eigenvectors of each domain. We use non-negative matrix factorization (NMF) techniques to generate a non-negative embedding that minimizes the distance between projections of the source and target data. We present a theoretical justification for our approach by showing the consistency of the similarity function defined using the obtained embedding. We also prove a theorem that relates the source and target domain errors using kernel embeddings of distribution functions. We validate our approach on benchmark data sets and show that it outperforms several state-of-the-art domain adaptation methods.
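The core idea, projecting source and target data onto a shared non-negative basis and measuring the residual gap between the projected domains, can be sketched as follows. This is a minimal illustration using scikit-learn's `NMF` on toy data, not the authors' actual algorithm: the shared basis here is learned by factorizing the stacked domains, whereas the paper derives it from the eigenvectors of each domain.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Toy non-negative data (rows = samples); the target is shifted
# to simulate a distribution mismatch between domains.
Xs = rng.random((100, 20))          # source domain
Xt = rng.random((80, 20)) + 0.5    # target domain

# Learn one shared non-negative basis from the stacked domains
# (a stand-in for the paper's eigenvector-derived basis).
nmf = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
nmf.fit(np.vstack([Xs, Xt]))

# Project each domain onto the shared basis; the embeddings are
# non-negative by construction.
Hs = nmf.transform(Xs)
Ht = nmf.transform(Xt)

# Distance between the mean projections quantifies the remaining
# domain shift in the shared embedding space.
shift = np.linalg.norm(Hs.mean(axis=0) - Ht.mean(axis=0))
print(f"residual shift in embedding space: {shift:.3f}")
```

An adaptation method in this family would then minimize such a projection distance while learning the basis, so that a classifier trained on `Hs` transfers to `Ht`.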
