Abstract

Graph Neural Networks (GNNs) have achieved remarkable performance in classification tasks due to their strong discriminative power over different graph topologies. However, traditional GNNs face serious limitations in link prediction tasks: they learn vertex embeddings from a fixed input graph, so the learned embeddings cannot reflect unobserved graph structures. Graph-learning-based GNNs have shown better performance by jointly learning graph structures and vertex embeddings, but most of them rely on available initial features to refine graphs and perform graph learning only once. Recently, some methods have utilized contrastive learning to facilitate link prediction, but their graph augmentation strategies are predefined on the original graphs only and do not introduce unobserved edges into the augmented graphs. To this end, a self-supervised reconstructed graph learning (SRGL) method is proposed. The key points of SRGL are twofold. First, it generates augmented graphs for contrasting by learning reconstructed graphs and vertex embeddings from each other, which brings unobserved edges into the augmented graphs. Second, it maximizes the mutual information between the edge-level embeddings of the reconstructed graphs and the graph-level embedding of the original graph, which guarantees that the learned reconstructed graphs remain relevant to the original graph.
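The abstract does not specify the exact form of the mutual-information objective. As a rough illustration only, the sketch below shows one common instantiation of such an edge-level/graph-level contrastive bound (in the style of Jensen-Shannon MI estimators with a bilinear discriminator, as popularized by Deep Graph Infomax); all function names, shapes, and the corruption scheme here are assumptions, not SRGL's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_summary(H):
    # Graph-level embedding of the original graph: mean-pool readout
    # over vertex embeddings (one common choice; assumed here).
    return H.mean(axis=0)

def edge_embeddings(H, edges):
    # Edge-level embedding: elementwise product of the two endpoint
    # embeddings (an illustrative choice, not taken from the paper).
    return H[edges[:, 0]] * H[edges[:, 1]]

def mi_lower_bound(E_pos, E_neg, s, W):
    # Jensen-Shannon-style MI lower bound with a bilinear
    # discriminator D(e, s) = sigmoid(e^T W s). Positives are edges of
    # the reconstructed graph paired with the original graph's summary;
    # negatives use embeddings from a corrupted graph.
    def score(E):
        return 1.0 / (1.0 + np.exp(-(E @ W @ s)))
    eps = 1e-9
    pos = np.log(score(E_pos) + eps).mean()
    neg = np.log(1.0 - score(E_neg) + eps).mean()
    return pos + neg

# Toy example with random data.
n, d = 6, 4
H = rng.normal(size=(n, d))                 # vertex embeddings
edges = np.array([[0, 1], [1, 2], [3, 4]])  # reconstructed-graph edges
H_corrupt = H[rng.permutation(n)]           # corruption: shuffle vertices
s = graph_summary(H)
W = rng.normal(size=(d, d)) * 0.1
E_pos = edge_embeddings(H, edges)
E_neg = edge_embeddings(H_corrupt, edges)
loss = -mi_lower_bound(E_pos, E_neg, s, W)  # minimize to maximize MI
```

In a full training loop, the vertex embeddings and the reconstructed adjacency would both be learnable, so minimizing this loss ties the reconstructed graph's edges back to the original graph's global representation.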
