Abstract

Multi-view subspace clustering has achieved remarkable success in multi-view learning for high-dimensional data. However, many existing multi-view subspace clustering methods suffer from two drawbacks. First, most of them recover the subspace structure from either the view-consistent or the view-specific perspective alone. Second, they often fail to exploit the high-order information among different views. To alleviate these two issues, this paper proposes a novel multi-view subspace clustering method that learns the view-specific representation and the low-rank tensor representation in a unified framework. Specifically, our method learns the view-specific representation from the data samples by exploiting the local structure within each view. Meanwhile, it builds the low-rank tensor representation from the view-specific representations to capture the high-order correlation across multiple views. Based on this joint representation learning framework, the proposed method explores both the intra-view pairwise information and the inter-view complementary information, so that the underlying data structure can be revealed and the final clustering result obtained through subsequent spectral clustering. Furthermore, the proposed Joint Representation Learning for Multi-view Subspace Clustering (JRL-MSC) method formulates a unified objective function, which can be efficiently optimized by the alternating direction method of multipliers (ADMM). Experimental results on multiple real-world data sets demonstrate that our method outperforms state-of-the-art counterparts.
