Multi-view clustering aims to exploit semantic information from multiple perspectives to accomplish the clustering task. A crucial concern in this domain, however, is the selection of discriminative features. Most existing methods map the data into a single feature space and then construct a similarity matrix; this often under-utilises the intrinsic information in the data and neglects the impact of noise, leading to poor representation learning. The information bottleneck (IB) is a theoretical model grounded in information theory whose core idea is to retain the information that is useful for a given task by selecting an appropriate representation and discarding redundant, irrelevant information. In this study, we propose an IB fusion model for deep multi-view clustering (IBFDMVC), which operates on two distinct feature spaces and reconstructs semantic information in parallel. IBFDMVC consists of three modules. The encoder module uses two linear encoding layers to learn embeddings of different dimensions. The fusion module adopts a collaborative training scheme: contrastive learning is first employed to enhance the representations, and IB theory is then used to suppress representation noise. Finally, the clustering module performs clustering with k-means. Compared with state-of-the-art multi-view clustering methods, IBFDMVC achieves better results, confirming that IB theory provides a robust framework for feature selection and semantic information extraction in multi-view data analysis.
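The abstract outlines a three-module pipeline (per-view linear encoders with different embedding dimensions, a fusion stage combining contrastive learning with an IB-style penalty, and k-means clustering). The following is a minimal PyTorch sketch of that pipeline, assuming illustrative layer sizes, an InfoNCE-style cross-view contrastive loss, a simple L2 proxy for the IB compression term, and concatenation-based fusion; none of these choices are taken from the paper itself, whose exact losses and architecture are not specified in the abstract.

```python
# Illustrative sketch of an IBFDMVC-style pipeline; all dimensions, loss
# weights, and the IB proxy are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


class IBFDMVCSketch(nn.Module):
    def __init__(self, in_dims=(784, 784), emb_dims=(256, 128), shared_dim=64):
        super().__init__()
        # One linear encoding layer per view; the two embeddings have different dims.
        self.encoders = nn.ModuleList(
            [nn.Linear(d, e) for d, e in zip(in_dims, emb_dims)])
        # Projection heads map the differently-sized embeddings into a shared space
        # so that a cross-view contrastive loss can be computed (an assumption).
        self.projectors = nn.ModuleList(
            [nn.Linear(e, shared_dim) for e in emb_dims])

    def forward(self, views):
        zs = [F.relu(enc(x)) for enc, x in zip(self.encoders, views)]
        hs = [F.normalize(proj(z), dim=1) for proj, z in zip(self.projectors, zs)]
        return zs, hs


def contrastive_loss(h1, h2, temperature=0.5):
    """InfoNCE-style loss: the same sample across the two views is the positive pair."""
    logits = h1 @ h2.t() / temperature                 # (N, N) cross-view similarities
    targets = torch.arange(h1.size(0), device=h1.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def ib_compression_penalty(z):
    """Crude stand-in for the IB compression term I(X; Z): penalise embedding magnitude.
    The paper's actual IB objective is not given in the abstract."""
    return z.pow(2).mean()


def train_step(model, views, optimizer, beta=1e-3):
    zs, hs = model(views)
    loss = contrastive_loss(hs[0], hs[1]) \
        + beta * sum(ib_compression_penalty(z) for z in zs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def cluster(model, views, n_clusters):
    _, hs = model(views)
    fused = torch.cat(hs, dim=1).cpu().numpy()         # simple concatenation fusion
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(fused)
```

As a usage note, `train_step` would be called over mini-batches of paired views before `cluster` assigns labels; the `beta` weight trades off the contrastive (relevance) term against the compression penalty, mirroring the relevance/compression trade-off in the IB objective.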