Abstract

Multi-view feature fusion is a vital phase in multi-view representation learning. Recently, many Graph Auto-Encoders (GAEs) and their variants have been applied to multi-view learning. However, most of them ignore deep fusion of the features from each view, and few impose unsupervised constraints that guide the training process toward stronger graph representations. In this paper, we propose a novel unsupervised Multi-view Deep Graph Representation Learning (MDGRL) framework for multi-view data. It combines Graph Auto-Encoders (GAEs) for local feature learning, a feature fusion module that produces a global representation, and a variant of the Variational Graph Auto-Encoder (VGAE) for global deep graph representation learning. To incorporate a Nearest Neighbor Constraint (NNC) between the maximal-degree node, i.e., the most densely connected node, and its adjacent nodes into the VGAE, we propose a new Nearest Neighbor Constraint Variational Graph Auto-Encoder (NNC-VGAE) that strengthens the global deep graph representation of multi-view data. During the training of NNC-VGAE, the NNC gradually draws the adjacent nodes closer to the maximal-degree node. As a result, the proposed MDGRL achieves strong deep graph representation capability for multi-view data. Experiments on eight non-medical benchmark multi-view data sets and four medical data sets confirm the effectiveness of MDGRL compared with other state-of-the-art methods for unsupervised clustering.
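The Nearest Neighbor Constraint described above can be illustrated with a minimal sketch. The code below is a hypothetical NumPy implementation, not the authors' code: it identifies the maximal-degree node from a binary adjacency matrix and computes the mean squared distance between that node's embedding and the embeddings of its adjacent nodes, a penalty that, when minimized during training, would draw the neighbors toward the hub as the abstract describes. The function name `nnc_loss` and the exact form of the penalty are assumptions for illustration.

```python
import numpy as np

def nnc_loss(Z, A):
    """Sketch of a Nearest Neighbor Constraint (assumed form, not the paper's code).

    Z : (n, d) array of node embeddings.
    A : (n, n) binary adjacency matrix without self-loops.
    Returns the mean squared distance between the maximal-degree node's
    embedding and the embeddings of its adjacent nodes.
    """
    degrees = A.sum(axis=1)
    hub = int(np.argmax(degrees))        # maximal-degree node
    neighbors = np.flatnonzero(A[hub])   # indices of its adjacent nodes
    if neighbors.size == 0:
        return 0.0
    diffs = Z[neighbors] - Z[hub]        # neighbor-to-hub differences
    return float(np.mean(np.sum(diffs ** 2, axis=1)))
```

Added to a VGAE-style reconstruction and KL objective as a weighted term, minimizing this quantity would progressively pull the hub's neighbors toward it in the latent space, which is the effect the abstract attributes to NNC.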
