Abstract
Domain adaptation (DA) algorithms address the problem of distribution shift between training and testing data. Recent approaches transform data into a shared subspace by minimizing the shift between their marginal distributions. We propose a method to learn a common subspace that leverages the class-conditional distributions of the training samples while also reducing the marginal distribution shift. To learn the subspace, we employ a supervised technique based on non-parametric mutual information, inducing soft label assignments for the unlabeled test data. The approach learns an iterative linear transformation: it repeatedly updates the test-data predictions via soft-labeling and then improves the subspace by maximizing mutual information. Comprehensive experiments on benchmark datasets demonstrate the efficacy of the proposed framework over state-of-the-art approaches.
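To make the iterative procedure concrete, the sketch below shows one plausible shape of the alternating loop the abstract describes: soft-label the unlabeled target data in the current subspace, then re-learn the linear projection from the (pseudo-)labeled data. Everything here is illustrative, not the authors' implementation: the function name `iterative_subspace_da`, the kNN soft-labeler, and the Fisher-style scatter ratio used as a simple stand-in for the paper's non-parametric mutual-information criterion are all assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import KNeighborsClassifier

def iterative_subspace_da(Xs, ys, Xt, dim=20, n_iters=5, reg=1e-3):
    """Hypothetical sketch of the iterative loop: alternate between
    (a) soft-labeling the unlabeled target data in the current subspace
    and (b) re-learning a projection that separates the classes.
    The separation step uses a Fisher-style scatter ratio as a simple
    stand-in for the paper's non-parametric mutual-information objective."""
    X = np.vstack([Xs, Xt])
    # Initialize with PCA on the pooled data (marginal alignment only).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:dim].T                                   # d x dim projection

    classes = np.unique(ys)
    d = Xs.shape[1]
    for _ in range(n_iters):
        Zs, Zt = Xs @ W, Xt @ W
        # Soft-label the target via class probabilities from a kNN model.
        clf = KNeighborsClassifier(n_neighbors=5).fit(Zs, ys)
        P = clf.predict_proba(Zt)                    # soft assignments
        # Build soft within/between scatter matrices in the input space,
        # weighting target points by their soft class memberships.
        Sw = reg * np.eye(d)                         # regularized for stability
        Sb = np.zeros((d, d))
        mu = X.mean(axis=0)
        for k, c in enumerate(classes):
            w_t = P[:, k]                            # target: soft weights
            w_s = (ys == c).astype(float)            # source: hard weights
            w = np.concatenate([w_s, w_t])
            n_c = w.sum()
            mu_c = (w[:, None] * X).sum(axis=0) / n_c
            Xd = X - mu_c
            Sw += (w[:, None] * Xd).T @ Xd
            diff = (mu_c - mu)[:, None]
            Sb += n_c * (diff @ diff.T)
        # New projection: top generalized eigenvectors of (Sb, Sw).
        vals, vecs = eigh(Sb, Sw)
        W = vecs[:, np.argsort(vals)[::-1][:dim]]

    Zs, Zt = Xs @ W, Xt @ W
    return W, KNeighborsClassifier(n_neighbors=5).fit(Zs, ys).predict(Zt)
```

The key design point the abstract emphasizes survives even in this simplified form: the subspace is refined using class information from both domains, with the unlabeled target contributing through soft assignments rather than hard pseudo-labels, so early misclassifications carry less weight in the next update.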