Abstract

Unsupervised graph learning aims to learn an encoder that embeds high-dimensional nodes into compact continuous vectors while preserving topological and semantic features simultaneously, without using any label information. Recently, contrastive learning (CL) on graphs has revived the traditional InfoMax principle: two views of the input graph are generated randomly, and the agreement between them is maximized. However, stochastic augmentation of a graph raises two problems. First, it ignores the importance of particular nodes and feature dimensions, so removing crucial edges or masking discriminative features may decrease the informativeness of the generated view. Second, a graph contains multi-level substructures that can be exploited for the encoder's topological learning. This paper proposes Cluster-Aware Multiplex InfoMax (CAMI) for unsupervised graph representation learning. We apply an adaptive graph augmentation scheme to both the topological and feature dimensions to generate graph views without damaging vital graph structure. To encourage the encoder to capture more underlying node interactions, we additionally impose a mutual-information maximization constraint between each node's representation and multi-level graph summaries. Extensive experimental results on seven real-world datasets with different tasks demonstrate the effectiveness of the CAMI framework.
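For intuition, the following is a minimal, hypothetical PyTorch sketch of the two ingredients the abstract describes: an adaptive edge-dropping augmentation that tends to preserve important edges, and a DGI-style mutual-information (InfoMax) objective between node embeddings and a graph-level summary. This is not the authors' CAMI implementation; all names (drop_edges_adaptively, infomax_loss) and the centrality-based importance scores are illustrative assumptions.

    # Illustrative sketch only -- not the authors' CAMI implementation.
    import torch

    def drop_edges_adaptively(edge_index, importance, max_drop=0.5):
        """Drop each edge with probability inversely tied to its importance
        (e.g., an edge-centrality score), so vital edges are rarely removed."""
        imp = (importance - importance.min()) / (importance.max() - importance.min() + 1e-9)
        drop_prob = (1.0 - imp) * max_drop        # high importance -> low drop probability
        keep = torch.rand_like(drop_prob) >= drop_prob
        return edge_index[:, keep]

    def infomax_loss(node_emb, summary):
        """Binary-classification form of the InfoMax objective (as in Deep Graph
        Infomax): positives pair each node embedding with the graph summary;
        negatives use row-shuffled node embeddings."""
        pos = torch.sigmoid((node_emb * summary).sum(-1))
        neg_emb = node_emb[torch.randperm(node_emb.size(0))]
        neg = torch.sigmoid((neg_emb * summary).sum(-1))
        eps = 1e-9
        return -(torch.log(pos + eps).mean() + torch.log(1 - neg + eps).mean())

    # Toy usage with random embeddings and a mean-pooled graph summary.
    node_emb = torch.randn(100, 64)
    summary = torch.sigmoid(node_emb.mean(dim=0))
    loss = infomax_loss(node_emb, summary)
    edge_index = torch.randint(0, 100, (2, 400))
    importance = torch.rand(400)                  # stand-in for edge centrality
    view = drop_edges_adaptively(edge_index, importance)

Applying this same loss against several summaries computed at different structural levels (e.g., cluster-level as well as graph-level readouts) would correspond to the multi-level mutual-information constraint the abstract mentions.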
