Abstract

Multi-view clustering aims to uncover consistent clustering patterns across multiple data sources or feature spaces. The field grapples with two key challenges: (1) effectively integrating and exploiting the consistency and complementarity information present in the different view spaces, and (2) capturing the structural correlations between data samples in the multi-view context. To address these challenges, this paper proposes the Multi-view contrAstive clustering with Graph Aggregation and confidence enhancement (MAGA) algorithm. Specifically, we employ a deep autoencoder network to learn embedded features for each independent view. To harness consistency and complementarity information, we introduce the Simple Cross-view Spectral Graph Aggregation module, which uses graph convolutional layers to generate view-specific graph embeddings and then aggregates the embeddings from different views into a unified feature space via a cross-view self-attention mechanism. To capture both inter-view and intra-view structural correlations among samples, we propose a dual representation contrastive learning mechanism that operates concurrently at the instance and feature levels. Additionally, we introduce a cluster assignment confidence maximization mechanism to obtain more compact clustering assignments. MAGA outperforms 20 competitive methods across nine benchmark datasets. Code: https://github.com/BJT-bjt/MAGA.
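The cross-view aggregation step described above can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: the function name, the single-head attention form, and the final mean-pooling over views are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_views(view_embeddings, w_q, w_k):
    """Fuse per-view graph embeddings into one unified embedding with a
    cross-view self-attention over the view axis (illustrative sketch;
    the real module may use learned value projections and multiple heads).

    view_embeddings: list of V arrays, each of shape (n, d)
    w_q, w_k: (d, d) query/key projection matrices
    """
    Z = np.stack(view_embeddings, axis=1)          # (n, V, d)
    q = Z @ w_q                                    # queries per view
    k = Z @ w_k                                    # keys per view
    # For each sample, score every view against every other view.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(Z.shape[-1])  # (n, V, V)
    attn = softmax(scores, axis=-1)                # cross-view weights
    fused = (attn @ Z).mean(axis=1)                # unified space: (n, d)
    return fused
```

In a full pipeline, `fused` would feed both the contrastive objectives and the clustering head; here it only demonstrates how per-view embeddings can be weighted and merged per sample.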
