Abstract

Unsupervised transfer learning has attracted considerable attention in the big-data era, owing to its ability to extract knowledge from large-scale unlabeled samples across multiple data domains. Existing unsupervised transfer learning methods mainly focus on learning a common latent space for the source and target domains, while the data representation and subspace structure of the target domain are usually ignored. In this paper, we develop an Unsupervised Transfer learning approach based on Low-Rank Coding (UTLRC), which takes advantage of the high-level structural information in the target domain. A dictionary containing basis vectors for low-level visual patterns is shared by samples from the source and target domains. By utilizing this cross-domain dictionary, UTLRC is able to effectively encode the samples in the target domain. In addition, a low-rank constraint is incorporated to model the subspace structure of the target domain, and a sparse constraint is imposed on the source domain. We formulate UTLRC as a rank-minimization problem and design an effective optimization algorithm based on the alternating direction method of multipliers (ADMM) to solve it. Since existing evidence already indicates connections between neural mechanisms and sparse coding, the low-rank coding strategy studied in this work may further inform the understanding of the neural mechanisms of cognition. We apply UTLRC to image clustering and evaluate its performance on several benchmark datasets. Extensive experimental results demonstrate the effectiveness of UTLRC compared to representative subspace clustering methods.
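The abstract does not specify the solver's update steps, but ADMM algorithms for rank minimization with a sparse term typically alternate between two proximal operators: singular value thresholding for the nuclear-norm (low-rank) surrogate and elementwise soft thresholding for the L1 (sparse) penalty. The sketch below shows these two standard building blocks in NumPy; the function names and the toy check are illustrative and are not taken from the paper.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||X||_* (nuclear norm), used for the low-rank code."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink each singular value
    return U @ np.diag(s_shrunk) @ Vt

def soft_threshold(X, tau):
    """Elementwise soft thresholding: the proximal operator of
    tau * ||X||_1, used for the sparse code."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Toy check: shrinking singular values reduces the nuclear norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))  # rank-2 matrix
print(np.linalg.norm(A, "nuc"), np.linalg.norm(svt(A, 0.5), "nuc"))
```

In a full ADMM iteration these operators would be interleaved with dictionary and multiplier updates; the details depend on the exact objective in the paper.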
