Abstract

Multi-view clustering aims to partition data into their underlying clusters by leveraging information from multiple views. To exploit cross-view information, tensor-based subspace learning approaches have attracted much attention. Recent work on recovering the essential tensor mainly focuses on learning a representation tensor under sparse and low-rank constraints. However, this process may be unstable because it does not preserve the local structure among samples. To address this issue, we introduce a novel self-expressive tensor learning method that imposes both global and local constraints to promote the learning of the representation tensor. Specifically, we construct a tensor-based subspace representation that unifies low-rank and graph-regularized tensor learning in a single optimization problem. The essential global structure and high-order correlations are naturally captured through low-rank self-expressive tensor learning, while local structures are preserved by imposing graph-regularization terms on the representation tensor, which benefits the subsequent clustering task. We present an effective optimization procedure for solving the proposed model and conduct extensive experiments on text, object, and gene-expression datasets. The experimental results demonstrate that the proposed method, named TLGRL, outperforms benchmark methods.
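The abstract combines three ingredients: a self-expressive reconstruction term, a low-rank penalty on the representation (global structure), and a graph regularizer (local structure). The sketch below evaluates such an objective for a single view; the exact formulation, weights, and graph construction are assumptions for illustration, not the paper's definition (a real tensor method would stack one coefficient matrix per view into a representation tensor).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-view data: d features x n samples. (Hypothetical sizes;
# the paper's multi-view setting stacks one Z per view into a tensor.)
d, n = 5, 8
X = rng.standard_normal((d, n))

# Candidate self-expressive coefficient matrix Z, so that X ~ X @ Z.
Z = 0.1 * rng.standard_normal((n, n))

# A simple Gaussian-kernel similarity graph W and its unnormalized
# Laplacian L = D - W (assumed construction for the local-structure term).
W = np.exp(-np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

def objective(X, Z, L, lam=1.0, gamma=1.0):
    """Self-expressive reconstruction + nuclear norm (low-rank, global)
    + graph regularizer (local smoothness). Weights lam/gamma are
    illustrative hyperparameters."""
    recon = np.linalg.norm(X - X @ Z, "fro") ** 2   # self-expression error
    nuclear = np.linalg.norm(Z, ord="nuc")          # sum of singular values
    graph = np.trace(Z @ L @ Z.T)                   # smoothness over graph
    return recon + lam * nuclear + gamma * graph

val = objective(X, Z, L)
```

In the full model these three terms would be optimized jointly over Z (e.g., by alternating minimization or ADMM); this sketch only shows how global and local constraints enter the same objective.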
