Abstract

Conventional classification models implicitly assume that the training and test data are drawn from the same distribution. In practice, however, this assumption rarely holds. To reduce the discrepancy between the training and test distributions, we propose a regularized subspace learning framework based on the low-rank representation technique for unsupervised domain adaptation. Specifically, we introduce a regularization term on the subspace projection matrix that remedies the ill-conditioning of the problem and guarantees a unique numerical solution. In addition, we impose a structured sparsity-inducing regularizer on the error term so that the proposed method can filter out outlier information, thereby improving performance. Extensive comparison experiments on benchmark data sets demonstrate the effectiveness of the proposed method.
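The abstract does not spell out the objective function; as a purely illustrative sketch, one common way to instantiate such a low-rank-representation framework (the symbols X_s, X_t, P, Z, E, \lambda, \gamma below are our assumptions, not the paper's notation) is

\[
\min_{P,\,Z,\,E}\ \|Z\|_{*} \;+\; \lambda\,\|E\|_{2,1} \;+\; \gamma\,\|P\|_{F}^{2}
\quad \text{s.t.} \quad P^{\top} X_{t} = P^{\top} X_{s} Z + E,
\]

where X_s and X_t are the source (training) and target (test) data matrices, P is the subspace projection matrix, Z is the representation coefficient matrix, and E is the error term. Under this reading, the nuclear norm \|Z\|_{*} enforces the low-rank representation, the \ell_{2,1} norm is a standard structured sparsity-inducing regularizer that zeroes out whole columns of E and thus filters outlier samples, and the Frobenius penalty \gamma\,\|P\|_{F}^{2} is the kind of regularization on the projection matrix that addresses ill-conditioning and yields a unique solution.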
