Abstract

Graph Attention (GA), which aims to learn attention coefficients for graph edges, has achieved impressive performance in GNNs on many graph learning tasks. However, existing GA mechanisms are usually learned from edge (or connected-node) features and thus fail to fully capture the rich structural information of edges. Some recent work attempts to incorporate structural information into GA learning, but how to fully exploit it remains a challenging problem. To address this challenge, in this work we propose to leverage a new Replicator Dynamics model for graph attention learning, termed Graph Replicator Attention (GRA). The core of GRA is our derivation of a replicator-dynamics-based sparse attention diffusion, which can explicitly learn context-aware, sparsity-preserving graph attentions in a simple self-supervised way. Moreover, GRA can be interpreted as an energy minimization model, which provides further theoretical justification for the proposed method. Experiments on ten benchmark datasets across several graph learning tasks demonstrate the effectiveness and advantages of the proposed GRA method.
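To make the replicator-dynamics idea concrete, the sketch below shows a generic discrete-time replicator update applied to the attention coefficients of one node's incident edges. This is a minimal illustration of the classical dynamics, not the paper's exact formulation: the per-edge `fitness` scores (e.g., feature similarities) are a hypothetical input, and the update reweights attention toward higher-fitness edges while keeping the coefficients a probability distribution, so low-fitness edges shrink toward zero and the attention becomes sparse.

```python
def replicator_attention_step(att, fitness):
    """One discrete replicator update on a node's edge attentions.

    att_e <- att_e * f_e / (sum_j att_j * f_j)

    The denominator is the attention-weighted average fitness, so the
    updated coefficients still sum to one (they stay on the simplex).
    """
    avg = sum(a * f for a, f in zip(att, fitness))
    return [a * f / avg for a, f in zip(att, fitness)]


# Example: uniform attention over 4 edges; edge 0 has the highest
# (hypothetical) fitness, so repeated updates concentrate attention
# on it and drive the other coefficients toward zero.
att = [0.25, 0.25, 0.25, 0.25]
fitness = [4.0, 1.0, 1.0, 1.0]
for _ in range(10):
    att = replicator_attention_step(att, fitness)
```

After a few iterations, nearly all attention mass sits on the fittest edge, which illustrates why replicator dynamics naturally yields sparse, context-dependent attention distributions.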
