Abstract

Multi-view subspace clustering aims to uncover the inherent structure of data by fusing the complementary information of multiple views to achieve better clustering results. However, most traditional multi-view subspace clustering algorithms are shallow: they neither capture the deep information in the data well nor investigate the data's self-representation in depth. To this end, this paper proposes a novel deep multi-view subspace clustering model that introduces exclusivity constraints. A deep autoencoder performs a nonlinear low-dimensional subspace mapping for each view to learn the deep structure of the original data. To better preserve each view's local structure and better reflect the complementarity among views, exclusivity constraints are imposed on the self-representation matrices located in the middle layer of the deep autoencoders. A multi-view consensus self-representation matrix captures the consistency information across the multi-view data. The autoencoder parameters and the clustering parameters are updated by iterative optimization within the same learning framework to improve clustering performance. Experiments on multi-view data sets show that this method better uncovers the inherent complementary structure of multi-view data, demonstrating its superiority.
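The objective described above combines per-view self-expression in the latent space, an exclusivity penalty between views' representation matrices, and a consistency term toward a consensus matrix. The following NumPy sketch illustrates how such a composite loss could be assembled; the function name, the specific penalty forms (Frobenius self-expression error, elementwise-product exclusivity, L1 sparsity on the consensus), and the hyperparameter weights `lam`, `mu`, `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multiview_selfrep_loss(Z_views, C_views, C_star, lam=1.0, mu=1.0, gamma=1.0):
    """Sketch of a multi-view self-representation objective (assumed form).

    Z_views : list of (n, d_v) latent codes from each view's encoder
    C_views : list of (n, n) per-view self-representation matrices
    C_star  : (n, n) consensus self-representation matrix
    lam, mu, gamma : illustrative trade-off weights
    """
    # Self-expression: each view's latent codes reconstructed as Z ≈ C Z
    self_expr = sum(np.linalg.norm(Z - C @ Z) ** 2
                    for Z, C in zip(Z_views, C_views))
    # Exclusivity: penalize overlap between different views' coefficients,
    # encouraging each view to contribute complementary structure
    exclusive = sum(np.abs(C_views[i] * C_views[j]).sum()
                    for i in range(len(C_views))
                    for j in range(i + 1, len(C_views)))
    # Consensus: pull every view's matrix toward the shared matrix
    consensus = sum(np.linalg.norm(C - C_star) ** 2 for C in C_views)
    # Sparsity on the consensus matrix (one common regularization choice)
    sparsity = np.abs(C_star).sum()
    return self_expr + lam * exclusive + mu * consensus + gamma * sparsity
```

In a full model this loss would be minimized jointly with each autoencoder's reconstruction loss, so that the latent mapping and the self-representation matrices are optimized in one framework, as the abstract describes.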
