Abstract

Recently, researchers have focused on exploiting given heterogeneous features to extract discriminative information for clustering. Most current work enforces consistency via fusion metrics, but the complementarity of multi-view features remains underexploited. In this paper, we propose an efficient consistent contrastive representation network (CCR-Net) for multi-view clustering, which provides a generalized framework for multi-view learning tasks. First, the proposed model exploits complementarity through a designed contrastive fusion module that learns a shared fusion weight. Second, the proposed method uses a consistent representation module to enforce consistency and obtain a consistent graph. Furthermore, we extend the proposed method to incomplete multi-view scenarios: the designed contrastive fusion module leverages the complementarity of multiple views to fill in the missing view graphs. Moreover, the consistent feature representation module adds a max-pooling layer to CCR-Net to capture a shared local structure and extract a latent low-dimensional embedding. Finally, the proposed method supports end-to-end training and offers flexible task interfaces for multi-view learning. Comprehensive evaluations on challenging multi-view tasks demonstrate that the proposed method achieves outstanding performance.
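To make the two modules concrete, here is a minimal PyTorch sketch of the general pattern the abstract describes: a fusion module with a learned shared weight over views, and a consistency module that max-pools across views and projects to a low-dimensional embedding from which a graph is built. All class names, shapes, and the NT-Xent-style contrastive loss are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveFusion(nn.Module):
    """Hypothetical sketch: fuse V view embeddings with a learned shared weight."""
    def __init__(self, num_views):
        super().__init__()
        # One learnable logit per view; softmax yields the shared fusion weight.
        self.view_logits = nn.Parameter(torch.zeros(num_views))

    def forward(self, view_embeddings):
        # view_embeddings: list of V tensors, each of shape (batch, dim)
        w = torch.softmax(self.view_logits, dim=0)       # shared fusion weights
        stacked = torch.stack(view_embeddings, dim=0)    # (V, batch, dim)
        return (w[:, None, None] * stacked).sum(dim=0)   # weighted fusion

def contrastive_loss(z_i, z_j, temperature=0.5):
    """NT-Xent-style loss between two views of the same batch (a common choice)."""
    z_i, z_j = F.normalize(z_i, dim=1), F.normalize(z_j, dim=1)
    logits = z_i @ z_j.t() / temperature                 # (batch, batch) similarities
    labels = torch.arange(z_i.size(0))                   # positives on the diagonal
    return F.cross_entropy(logits, labels)

class ConsistentRepresentation(nn.Module):
    """Hypothetical sketch: max-pool across views, project to a low-dim embedding,
    and build a consistent graph from embedding similarities."""
    def __init__(self, dim, embed_dim):
        super().__init__()
        self.proj = nn.Linear(dim, embed_dim)

    def forward(self, view_embeddings):
        stacked = torch.stack(view_embeddings, dim=0)    # (V, batch, dim)
        pooled, _ = stacked.max(dim=0)                   # element-wise max over views
        z = self.proj(pooled)                            # latent low-dim embedding
        z_n = F.normalize(z, dim=1)
        graph = z_n @ z_n.t()                            # consistent graph (cosine sim)
        return z, graph

# Toy usage: two views, batch of 8, feature dim 16
views = [torch.randn(8, 16), torch.randn(8, 16)]
fused = ContrastiveFusion(num_views=2)(views)            # (8, 16) fused features
z, graph = ConsistentRepresentation(dim=16, embed_dim=4)(views)
loss = contrastive_loss(views[0], views[1])              # contrast paired views
```

In this sketch, the softmax-normalized fusion weights stand in for the "shared fusion weight" the abstract mentions, and the max-pool-then-project path stands in for the shared local structure and latent embedding; the real CCR-Net architecture may differ substantially.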
