Abstract

Unsupervised domain adaptation addresses the challenging scenario in which training and test samples are related yet drawn from different distributions. A common technique for aligning the distributions is feature transformation, which is attracting increasing attention in the transfer learning community. Many existing studies concentrate on learning domain-invariant feature representations in order to transfer classifiers from the source to the target domain. However, the class discriminability of the features is equally crucial for cross-domain learning, and ignoring it can degrade performance. To extract feature representations that are both transferable and discriminative, a novel unsupervised domain adaptation method termed re-weighted transfer subspace learning with inter-class sparsity (ICS-RTSL) is proposed. Central to the approach is a class-wise sparsity regularization that reduces intra-class distances by preserving the structural consistency of the source samples, thereby enabling a more discriminative representation to be learned. In addition, a residual term based on the least absolute criterion is constructed to mitigate the negative impact of possible outliers. An efficient algorithm based on iteratively re-weighted least squares is then developed to optimize the learning model. Supported by a set of comparative experiments on cross-domain image recognition tasks, the effectiveness of the proposed ICS-RTSL is demonstrated.
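
The abstract only names the ingredients; the exact objective and updates are in the full text. As a rough, illustrative sketch (not the authors' formulation or code), the snippet below shows how an iteratively re-weighted least squares (IRLS) loop can jointly handle a sample-wise least-absolute-style residual and a class-wise (column-wise L2,1) sparsity regularizer on the projected source samples. All symbols (X, y, W, lam, the surrogate objective) are assumptions chosen for the illustration.

```python
# Minimal IRLS sketch (NOT the paper's algorithm) of the two mechanisms the abstract names:
# a robust, re-weighted residual term and a class-wise sparsity regularizer.
import numpy as np

def icsrtsl_sketch(X, y, lam=0.1, n_iter=30, eps=1e-6):
    """Illustrative solver for
        min_W  sum_i ||x_i W - y_i||_2  +  lam * sum_c ||X_c W||_{col-2,1},
    where X is (n, d), y holds integer class labels, and W is a (d, k) projection.
    The first term is a sample-wise robust residual; the second encourages samples of
    the same class to share zero columns after projection (class-wise sparsity)."""
    n, d = X.shape
    classes = np.unique(y)
    k = len(classes)
    Y = np.eye(k)[np.searchsorted(classes, y)]                 # one-hot target matrix
    W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ Y)   # ridge initialization
    for _ in range(n_iter):
        # Re-weight the residual term: g_i = 1 / (2 * ||x_i W - y_i||_2).
        res = np.linalg.norm(X @ W - Y, axis=1)
        g = 1.0 / (2.0 * np.maximum(res, eps))
        XtGX = X.T @ (g[:, None] * X)
        XtGY = X.T @ (g[:, None] * Y)
        # Re-weight the class-wise sparsity term:
        # for each class c, D_c = diag(1 / (2 * ||column_j of X_c W||_2)).
        A = np.kron(np.eye(k), XtGX)                           # vec'd quadratic surrogate
        for c in classes:
            Xc = X[y == c]
            col_norms = np.linalg.norm(Xc @ W, axis=0)
            Dc = np.diag(1.0 / (2.0 * np.maximum(col_norms, eps)))
            A += lam * np.kron(Dc, Xc.T @ Xc)
        A += 1e-8 * np.eye(d * k)                              # small ridge for stability
        # Solve the re-weighted least-squares subproblem in vec(W) (column-major order).
        W = np.linalg.solve(A, XtGY.flatten(order="F")).reshape(d, k, order="F")
    return W

# Toy usage: project 20-dimensional "source" samples from 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = np.repeat([0, 1, 2], 20)
W = icsrtsl_sketch(X, y)
print(W.shape)                                                 # (20, 3)
```

Each IRLS pass replaces the non-smooth terms with weighted quadratic surrogates, so every subproblem reduces to a single linear solve; this is the general mechanism behind the re-weighted optimization the abstract refers to, not a reproduction of the paper's specific model.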
