Abstract

Deep multi-view clustering uses neural networks to extract the latent complementary and consistent information shared among multi-view features, yielding a consistent representation that improves clustering performance. Although many deep multi-view clustering approaches have been proposed, most achieve good performance but lack theoretical interpretability. In this paper, we propose an effective differentiable network with alternating iterative optimization for multi-view co-clustering, termed differentiable bi-sparse multi-view co-clustering (DBMC), together with an extension named elevated DBMC (EDBMC). The proposed methods are transformed into equivalent deep networks derived from the constructed objective loss functions, combining the strong interpretability of classical machine learning methods with the superior performance of deep networks. Moreover, DBMC and EDBMC learn a joint and consistent collaborative representation from multi-source features while guaranteeing sparsity in both the multi-view feature space and the single-view sample space, and they can be converted into deep differentiable network frameworks with block-wise iterative training. Correspondingly, we design two three-step iterative differentiable networks that solve the resulting optimization problems with theoretically guaranteed convergence. Extensive experiments on six multi-view benchmark datasets demonstrate that the proposed frameworks outperform other state-of-the-art multi-view clustering methods.
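The abstract describes learning a shared representation from multiple views by alternating iterative optimization with a sparsity constraint. The details of DBMC/EDBMC are in the full text; as a minimal illustrative sketch only, the snippet below shows the generic pattern such methods build on: alternately updating per-view bases `W_v` (least squares) and a shared sparse code `H` (one ISTA proximal-gradient step) for the assumed objective `sum_v ||X_v - W_v H||_F^2 + lam * ||H||_1`. All names and the objective here are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def soft_threshold(Z, t):
    # Proximal operator of the l1 norm: elementwise shrinkage.
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def multiview_sparse_rep(views, k=5, lam=0.1, n_iter=50, seed=0):
    """Illustrative sketch (not DBMC itself): alternately update per-view
    bases W_v and a shared sparse code H to minimize
        sum_v ||X_v - W_v H||_F^2 + lam * ||H||_1.
    Each X_v in `views` is (d_v, n); all views share the n samples."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]
    H = rng.standard_normal((k, n))
    Ws = [rng.standard_normal((X.shape[0], k)) for X in views]
    for _ in range(n_iter):
        # W_v update: least squares with H fixed.
        HHt = H @ H.T + 1e-8 * np.eye(k)
        Ws = [X @ H.T @ np.linalg.inv(HHt) for X in views]
        # H update: one ISTA step with all W_v fixed.
        G = 2 * sum(W.T @ (W @ H - X) for W, X in zip(Ws, views))
        L = 2 * sum(np.linalg.norm(W, 2) ** 2 for W in Ws)  # Lipschitz bound
        H = soft_threshold(H - G / L, lam / L)
    return Ws, H
```

The columns of the learned `H` could then be fed to an off-the-shelf clustering step (e.g. k-means); the deep "unrolled" variants replace these hand-derived updates with trainable network layers.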
