Abstract

Information theory has achieved notable success in computer vision (CV) and natural language processing (NLP); consequently, many works have begun to learn better node-level and graph-level representations from an information-theoretic perspective. Previous works achieve strong performance by maximizing the mutual information between graph and node representations to capture graph information. However, simply mixing information into a single node representation loses information about the graph structure, creating an information gap between the model and the theoretically optimal solution. To address this problem, we propose to replace the node representation with a subgraph representation, reducing the information gap between the model and the optimal case. To capture sufficient information from the original graph, three operators (information aggregators), attribute-conv, layer-conv, and subgraph-conv, are designed to gather information from different aspects. Moreover, to generate more expressive subgraphs, we propose a universal framework that generates subgraphs autoregressively, providing a comprehensive understanding of the graph structure in a learnable way. We also propose a Head–Tail negative sampling method that supplies more negative samples for more efficient and effective contrastive learning. All of these components can be plugged into any existing Graph Neural Network. Experimentally, we achieve new state-of-the-art results on several benchmarks in the unsupervised setting. We also evaluate our model on semi-supervised learning tasks and make a fair comparison with state-of-the-art semi-supervised methods.
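To make the contrastive objective concrete, the sketch below illustrates a DGI-style mutual-information estimator between subgraph-level representations and a graph-level summary, with corrupted subgraphs as negatives (standing in for the Head–Tail negative samples mentioned above). This is a minimal illustration, not the authors' implementation: the class name, the bilinear discriminator, and the random tensors standing in for encoder outputs are all assumptions.

```python
# Minimal sketch (assumptions noted above) of a subgraph-vs-graph
# InfoMax contrastive loss: a bilinear discriminator scores
# (subgraph representation, graph summary) pairs, and maximizing the
# mutual-information bound reduces to a binary cross-entropy over
# positive and negative pairs.
import torch
import torch.nn as nn


class SubgraphInfoMax(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Bilinear scoring function D(h_subgraph, s_graph).
        self.discriminator = nn.Bilinear(dim, dim, 1)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, pos_sub: torch.Tensor, neg_sub: torch.Tensor,
                graph_summary: torch.Tensor) -> torch.Tensor:
        # pos_sub: (N, dim) subgraph representations from the original graph.
        # neg_sub: (M, dim) subgraph representations from negative samples.
        # graph_summary: (dim,) graph-level readout.
        s_pos = graph_summary.expand_as(pos_sub)
        s_neg = graph_summary.expand_as(neg_sub)
        pos_logits = self.discriminator(pos_sub, s_pos).squeeze(-1)
        neg_logits = self.discriminator(neg_sub, s_neg).squeeze(-1)
        logits = torch.cat([pos_logits, neg_logits])
        labels = torch.cat([torch.ones_like(pos_logits),
                            torch.zeros_like(neg_logits)])
        # Minimizing this BCE maximizes a Jensen-Shannon bound on the
        # mutual information between subgraph and graph representations.
        return self.loss_fn(logits, labels)


# Toy usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    dim = 64
    model = SubgraphInfoMax(dim)
    loss = model(torch.randn(10, dim), torch.randn(10, dim), torch.randn(dim))
    print(loss.item())
```

In the paper's setting, the positive subgraph representations would come from the proposed information aggregators and the negatives from the Head–Tail sampling procedure; any existing GNN encoder can produce the input representations.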
