Abstract

Most existing clustering methods use the original multi-view data as input to learn a similarity matrix that characterizes the underlying cluster structure shared by multiple views. This limits the flexibility of multi-view clustering, because multi-view data usually contain noise, and the variation between data points that should belong to the same cluster can be larger than the variation between points belonging to different clusters. To address these problems, we propose a novel multi-view clustering model, adaptive latent similarity learning (ALSL). ALSL takes the adaptively learned graph, which characterizes the relationships between clusters, as a new input for learning the latent data representation, and it integrates latent similarity representation learning, manifold learning, and spectral clustering into a unified framework. By exploiting the complementarity of multiple views, the learned latent similarity representation captures the cluster structure shared across views. Our model is intuitive and can be optimized efficiently with the Augmented Lagrangian Multiplier with Alternating Direction Minimization (ALM-ADM) algorithm. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed method.
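To make the high-level pipeline in the abstract concrete, the sketch below illustrates a generic multi-view graph-based spectral clustering workflow: build a similarity graph per view, fuse the views into a single similarity matrix, and cluster its spectral embedding. This is only an assumed illustration, not the ALSL formulation; the Gaussian-kernel graphs, the uniform averaging fusion, and the helper names (`gaussian_similarity`, `multi_view_clustering`) are placeholders standing in for the adaptive latent similarity learning described in the paper.

```python
# Hypothetical sketch of the generic pipeline the abstract alludes to:
# per-view similarity graphs -> fused similarity -> spectral clustering.
# The graph construction and fusion steps below are simple placeholders,
# not the adaptive latent similarity learning (ALSL) model itself.
import numpy as np

def gaussian_similarity(X, sigma=1.0):
    """Dense Gaussian-kernel similarity graph for one view (n x d array)."""
    sq_dists = (np.sum(X**2, axis=1, keepdims=True)
                + np.sum(X**2, axis=1) - 2.0 * X @ X.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))

def spectral_clustering(S, n_clusters, n_iters=100, seed=0):
    """Cluster rows of the spectral embedding of a similarity matrix S."""
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(S)) - d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    # Eigenvectors of the normalized Laplacian with the smallest eigenvalues.
    _, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :n_clusters]
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Plain Lloyd's k-means on the row-normalized embedding.
    rng = np.random.default_rng(seed)
    centers = U[rng.choice(len(U), n_clusters, replace=False)]
    for _ in range(n_iters):
        labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = U[labels == k].mean(axis=0)
    return labels

def multi_view_clustering(views, n_clusters):
    """views: list of (n x d_v) arrays describing the same n samples."""
    graphs = [gaussian_similarity(X) for X in views]
    # Placeholder fusion: uniform averaging. ALSL instead learns the latent
    # similarity adaptively, jointly with manifold and spectral terms.
    S = sum(graphs) / len(graphs)
    return spectral_clustering(S, n_clusters)

if __name__ == "__main__":
    # Toy example: two views of 60 samples drawn from 3 separated clusters.
    rng = np.random.default_rng(42)
    y = np.repeat([0, 1, 2], 20)
    view1 = rng.normal(size=(60, 5)) + y[:, None] * 4.0
    view2 = rng.normal(size=(60, 8)) + y[:, None] * 4.0
    print(multi_view_clustering([view1, view2], n_clusters=3))
```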
