Abstract

Transfer learning and domain adaptation address the problem in which the training set (source domain) and the test set (target domain) follow different distributions. In this paper, we investigate unsupervised domain adaptation, in which the target samples are unlabeled while the source domain is fully labeled. We learn distinct transformation matrices that map the source and target domains into disjoint subspaces, such that the distribution of each transformed target sample is similar to that of the source samples. Moreover, the marginal and conditional distribution discrepancies between the transformed source and target domains are minimized via a non-parametric criterion, the maximum mean discrepancy (MMD). Different classes in the source domain are further discriminated through between-class scatter maximization and within-class scatter minimization. In addition, the local information of the source and target data, including the geometrical structure of the samples, is preserved using sample labels. The performance of the proposed method is verified on several standard visual benchmark experiments, attaining an average accuracy of 70.63% across three benchmarks. Compared against other state-of-the-art domain adaptation methods, it achieves a 22.9% improvement.
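The non-parametric criterion mentioned above, maximum mean discrepancy, measures the distance between two distributions as the distance between their sample means in a kernel feature space. The sketch below is a minimal illustration with a linear kernel (so the feature map is the identity), not the paper's full objective; the function name `linear_mmd2` and the toy Gaussian data are illustrative assumptions.

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Squared maximum mean discrepancy with a linear kernel:
    the squared Euclidean distance between the domain means.
    Xs, Xt: (n_samples, n_features) arrays from source/target domains."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

# Toy example (illustrative): target domain is a shifted copy of the source.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 5))   # source samples
Xt = rng.normal(0.5, 1.0, size=(200, 5))   # target samples, mean-shifted
print(linear_mmd2(Xs, Xt))  # large when the domain means differ
print(linear_mmd2(Xs, Xs))  # exactly 0 for identical samples
```

Minimizing such a discrepancy after projecting both domains through learned transformation matrices is what aligns the marginal distributions; the conditional version applies the same statistic per (pseudo-)class.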
