Abstract

Domain adaptation (DA) aims to eliminate the discrepancy between the distribution of a labeled source domain, on which a classifier is trained, and that of an unlabeled or partially labeled target domain, to which the classifier is applied. Compared with semi-supervised domain adaptation, where some labeled data from the target domain is available to help train the classifier, unsupervised domain adaptation, where no target labels can be seen, is without doubt more challenging. Most published approaches suffer from high complexity of design or implementation. In this paper, we propose a simple method for unsupervised domain adaptation that minimizes domain shift by projecting each instance from the source and target domains into a common feature space using a linear kernel function. Our method is extremely simple and has no hyper-parameters (it can be implemented in two lines of Matlab code), yet it still outperforms state-of-the-art domain adaptation approaches on standard benchmark datasets.
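The abstract does not specify the exact form of the projection, so as an illustration only, the following sketch shows one common way to map source and target instances into a shared linear subspace: pooling both domains, centering, and projecting onto the top principal directions of the pooled data. The data arrays, dimensionality `k`, and the PCA-style projection are all assumptions for demonstration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))  # hypothetical source features
Xt = rng.normal(0.5, 1.2, size=(80, 5))   # hypothetical target features

# Pool both domains and compute one shared linear projection: the top-k
# principal directions of the pooled, centered data. This is a stand-in
# for the unspecified "common feature space" in the abstract.
X = np.vstack([Xs, Xt])
mu = X.mean(axis=0)
k = 2
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:k].T                 # (5, k) projection matrix

Zs = (Xs - mu) @ P           # projected source instances
Zt = (Xt - mu) @ P           # projected target instances
print(Zs.shape, Zt.shape)    # both domains now live in the same k-dim space
```

Because the same centering and projection are applied to both domains, any classifier trained on `Zs` can be evaluated directly on `Zt` in the shared space.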

