Abstract

Multi-view data clustering is essential for discovering patterns and exploiting information from different sources. In this context, we propose DeConFCluster, an unsupervised multi-view clustering fusion framework based on Deep Convolutional Transform Learning (CTL). Our approach has the advantage that it does not require an additional decoder network during the training phase. This makes our model less prone to overfitting in data-constrained scenarios, as opposed to several recent studies based on the encoder–decoder framework. Furthermore, our method incorporates a loss function inspired by K-Means, which enables it to learn more effective representations for the clustering task. Finally, we evaluate our framework on five standard multi-view clustering datasets, and show that it outperforms the state-of-the-art multi-view deep clustering techniques.
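The abstract mentions a K-Means-inspired loss that shapes the learned representations for clustering, but does not give its exact formulation. As a minimal illustrative sketch (not the paper's actual loss), a K-Means-style objective on embeddings `Z` with centroids `C` can be written as the sum of squared distances from each embedding to its nearest centroid:

```python
import numpy as np

def kmeans_style_loss(Z, centroids):
    """K-Means-inspired clustering loss: sum of squared distances from
    each embedding to its nearest centroid. Hypothetical sketch; the
    paper's exact DeConFCluster loss is not stated in the abstract."""
    # pairwise squared distances, shape (n_samples, n_clusters)
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)                 # hard cluster assignments
    loss = d2[np.arange(len(Z)), labels].sum() # distance to chosen centroid
    return loss, labels

# toy usage: two well-separated blobs, centroids near their means
Z = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
C = np.array([[0.05, 0.0], [5.05, 5.0]])
loss, labels = kmeans_style_loss(Z, C)
```

In an end-to-end setting such as the one described, a term like this would be minimized jointly with the representation-learning objective, so the encoder is pushed toward cluster-friendly embeddings without needing a decoder.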
