Abstract

Recently, graph neural networks (GNNs) have achieved great success in dealing with graph-based data. The basic idea of GNNs is to iteratively aggregate information from neighbors, which is a special form of Laplacian smoothing. However, most GNNs suffer from the over-smoothing problem: as the model goes deeper, the learned representations become indistinguishable. This reflects the inability of current GNNs to explore the global graph structure. In this paper, we propose a novel graph neural network to address this problem. A rejection mechanism is designed to alleviate over-smoothing, and a dilated graph convolution kernel is presented to capture high-level graph structure. Extensive experimental results demonstrate that the proposed model outperforms state-of-the-art GNNs and effectively overcomes the over-smoothing problem.

Highlights

  • Graph-structured data is ubiquitous in the real world, such as social networks [1,2,3,4,5], citation networks [6,7,8], wireless sensor networks [9], and graph-based molecules [10,11]

  • We focus on addressing the limitations of current graph neural networks (GNNs), i.e., the need for, and the bottleneck introduced by, global information

  • We propose a rejection mechanism that softly rejects information from distant nodes, freeing our model from the over-smoothing problem


Introduction

Graph-structured data is ubiquitous in the real world, such as social networks [1,2,3,4,5], citation networks [6,7,8], wireless sensor networks [9], and graph-based molecules [10,11]. Graph neural networks (GNNs) have therefore attracted a surge of research interest. The goal of GNNs is to learn representation vectors for the nodes in a graph; the learned vectors can then be used in many graph-based applications, such as link prediction and node classification [12,13,14,15,16]. The general idea of GNNs is "message propagation": each node iteratively passes, transforms, and aggregates messages (i.e., features) from its neighbors. After k iterations, each node captures the information of its neighbors within k hops. Many works have been devoted to developing graph neural networks along this line. GraphSAGE [6] proposes several aggregation strategies to propagate messages from neighbors effectively, and GAT [10] adopts self-attention [19] to propagate messages dynamically.
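The message-propagation idea and the over-smoothing behavior it leads to can be sketched with a toy mean-aggregation layer (a hypothetical illustration in NumPy, not the paper's model or the specific GraphSAGE/GAT aggregators): each round averages a node's features with its neighbors', so after k rounds every node has mixed in its k-hop neighborhood, and after many rounds all node representations converge to the same vector.

```python
import numpy as np

def propagate(adj: np.ndarray, feats: np.ndarray, k: int) -> np.ndarray:
    """Run k rounds of mean aggregation over a graph.

    adj:   (n, n) symmetric 0/1 adjacency matrix
    feats: (n, d) node feature matrix
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    deg_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)
    norm = deg_inv * a_hat                        # row-normalized: mean over neighbors
    h = feats
    for _ in range(k):
        h = norm @ h                              # one round of neighbor averaging
    return h

# Path graph 0-1-2 with one-hot features: node 0 only "sees" node 2
# once messages have had two hops to travel.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3)

h1 = propagate(adj, feats, k=1)    # h1[0, 2] == 0: no 2-hop signal yet
h2 = propagate(adj, feats, k=2)    # h2[0, 2] > 0: node 2's feature has arrived
h50 = propagate(adj, feats, k=50)  # all rows nearly identical: over-smoothing
```

With only one round, node 0's representation contains nothing from node 2; after two rounds it does; and after many rounds the rows of the feature matrix become indistinguishable, which is exactly the over-smoothing phenomenon the abstract describes.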
