Abstract

Multi-view clustering aims to reveal the correlations between different input modalities in an unsupervised way. Similarity between data samples can be described by a similarity graph, which governs the quality of multi-view clustering. However, existing multi-view graph learning methods mainly construct the similarity graph from raw features, which is unreliable because real-world datasets usually contain noise, outliers, and even redundant information. In this paper, we formulate a novel model that simultaneously learns a robust structured similarity graph and performs multi-view clustering. The similarity graph is adaptively learned from a latent representation that is invulnerable to noise and outliers. Furthermore, the similarity graph is enforced to have a clear structure, i.e., the number of connected components of the target graph exactly equals the ground-truth class number. Consequently, the cluster label of each data sample can be assigned directly, without any postprocessing. Our model thus accomplishes three subtasks in a unified framework: latent representation extraction, similarity graph learning, and cluster label allocation. These three subtasks are seamlessly integrated and mutually boost each other toward the overall optimal solution. An efficient alternating optimization algorithm is proposed to solve the resulting problem. Experimental results on several benchmark datasets illustrate the effectiveness of the proposed model.
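The key structural property above — a learned graph with exactly as many connected components as ground-truth classes — means cluster labels fall out of a simple component traversal. The sketch below illustrates this on a toy, hand-crafted graph (the matrix `S` and the helper `component_labels` are illustrative assumptions, not the paper's learned graph or algorithm):

```python
from collections import deque

# Toy similarity graph over 6 samples: a nonzero S[i][j] means samples
# i and j are connected. This graph has exactly 2 connected components,
# standing in for a learned graph whose component count equals the
# ground-truth class number.
S = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]

def component_labels(S):
    """Label each sample with the index of its connected component (BFS)."""
    n = len(S)
    labels = [-1] * n
    current = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        labels[start] = current
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if S[i][j] > 0 and labels[j] == -1:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

labels = component_labels(S)
print(labels)  # [0, 0, 0, 1, 1, 1]
```

Because each connected component is itself one cluster, no k-means or spectral postprocessing step is needed — in contrast to conventional graph-based methods, which must run such a step on the learned graph.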
