In this paper, we address the multiview nonlinear subspace representation problem. Traditional multiview subspace learning methods assume that the heterogeneous features of the data lie within a union of multiple linear subspaces. In many real-world applications, however, the features actually reside in multiple nonlinear subspaces, which leads to unsatisfactory clustering performance. To overcome this, we propose a hyper-Laplacian regularized multilinear multiview self-representation model, referred to as HLR-M2VS, to jointly learn the correlation among multiple views and the local geometrical structure in a unified tensor space and in view-specific self-representation feature spaces, respectively. In the unified tensor space, a well-founded tensor low-rank regularization is imposed on the self-representation coefficient tensor to ensure global consensus among different views. In the view-specific feature spaces, a hypergraph-induced hyper-Laplacian regularization is utilized to preserve the local geometrical structure embedded in the high-dimensional ambient space. An efficient algorithm is then derived to solve the optimization problem of the established model with a theoretical convergence guarantee. Furthermore, the proposed model can be extended to semisupervised classification without introducing any additional parameters. Extensive experiments on many challenging datasets demonstrate a clear advance over state-of-the-art multiview clustering and multiview semisupervised classification approaches.
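To make the two regularizers concrete, a minimal sketch of such an objective (in our notation; the exact norm and weighting used in the paper may differ) combines a view-wise self-expressive loss, a tensor low-rank penalty on the stacked coefficient tensor, and hyper-Laplacian smoothness terms:

$$
\min_{\{Z^{(v)}\}_{v=1}^{V}} \; \sum_{v=1}^{V} \big\| X^{(v)} - X^{(v)} Z^{(v)} \big\|_F^2 \;+\; \lambda_1 \big\| \mathcal{Z} \big\|_{\circledast} \;+\; \lambda_2 \sum_{v=1}^{V} \operatorname{tr}\big( Z^{(v)} L_h^{(v)} Z^{(v)\top} \big),
$$

where $X^{(v)}$ is the feature matrix of the $v$-th view, $Z^{(v)}$ its self-representation coefficient matrix, $\mathcal{Z}$ the third-order tensor obtained by stacking all $Z^{(v)}$, $\|\cdot\|_{\circledast}$ a tensor nuclear norm enforcing low rank (and hence cross-view consensus), $L_h^{(v)}$ the hyper-Laplacian built in the $v$-th feature space, and $\lambda_1, \lambda_2$ trade-off parameters.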
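The hyper-Laplacian term relies on the standard normalized hypergraph Laplacian $L_h = I - D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2}$. Below is a minimal NumPy sketch, assuming a binary vertex-hyperedge incidence matrix H (e.g., one hyperedge per sample containing the sample and its k nearest neighbors in that view); the function name and defaults are ours, not the paper's:

```python
import numpy as np

def hyper_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L_h = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.

    H : (n_vertices, n_edges) binary incidence matrix.
    w : optional hyperedge weights; defaults to uniform weights.
    """
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    d_v = H @ w                                   # weighted vertex degrees
    d_e = H.sum(axis=0)                           # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d_v, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(d_e, 1e-12))
    theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - theta
```

For a k-NN hypergraph in a given view, H has one column per sample whose nonzero rows mark the sample itself and its k nearest neighbors; the resulting $L_h^{(v)}$ then plugs directly into the trace regularizer sketched above.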