Abstract

On large-scale graphs, many graph neural networks struggle to capture long-range dependencies due to the oversmoothing problem. Recently, Graph Equilibrium Models (GEQs) have emerged as a promising solution to this issue. Their output is the equilibrium of a fixed-point equation, which can be seen as the result of iterating a GNN layer infinitely many times, so they inherently have global receptive fields. However, to find the equilibrium, GEQs must run costly full-batch root-finding algorithms from scratch at each model update, which leads to severe efficiency and scalability issues and prevents them from scaling to large graphs. To address these limitations, we propose VEQ, an efficient learning method that scales GEQs to large graphs. Instead of initializing the equilibrium from scratch in full-batch training, VEQ uses the latest equilibrium of in-batch nodes and their 1-hop neighbors (dubbed Virtual Equilibrium) to accelerate and calibrate the root-finding process in mini-batch training. With the virtual equilibrium as an informative prior, VEQ reaches the equilibrium in fewer steps while still capturing global dependencies. Theoretically, we provide convergence analysis for both the forward and backward passes of VEQ. Empirically, VEQ outperforms existing GEQs by a large margin (more than 1.5%) on all benchmark datasets, with much less training time and memory. VEQ also achieves performance competitive with, and sometimes superior to, many highly engineered explicit GNNs on large-scale benchmarks such as ogbn-arxiv and ogbn-products. VEQ shows that, once the efficiency and scalability issues are resolved, GEQs are indeed favorable on large graphs thanks to their ability to capture long-range dependencies.
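To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a graph equilibrium layer whose output solves the fixed-point equation Z = tanh(A Z W + X U) via simple fixed-point iteration, together with a warm start that reuses a cached equilibrium in the spirit of the virtual equilibrium described above. All names and constants (gamma, tol, max_iters, the toy graph) are illustrative assumptions, not details from the paper.

import torch

torch.manual_seed(0)

N, D = 8, 4                                   # nodes, feature dimension (toy sizes)
A = torch.rand(N, N)
A = (A + A.t()) / 2                           # symmetric "adjacency" (dense for simplicity)
A = A / torch.linalg.matrix_norm(A, ord=2)    # scale so ||A||_2 = 1
X = torch.randn(N, D)                         # node features

gamma = 0.5                                   # contraction factor (assumption)
W = torch.randn(D, D)
W = gamma * W / torch.linalg.matrix_norm(W, ord=2)   # enforce ||W||_2 = gamma < 1
U = torch.randn(D, D) / D ** 0.5

def f(Z):
    # One application of the implicit layer: Z -> tanh(A Z W + X U).
    return torch.tanh(A @ Z @ W + X @ U)

def solve_equilibrium(Z0, tol=1e-5, max_iters=200):
    # Plain fixed-point (Picard) iteration starting from the initial guess Z0.
    Z = Z0
    for k in range(max_iters):
        Z_next = f(Z)
        if torch.linalg.norm(Z_next - Z) < tol:
            return Z_next, k + 1
        Z = Z_next
    return Z, max_iters

# Cold start: initialize the root-finder from zeros at every update,
# as in standard full-batch GEQ training.
Z_cold, iters_cold = solve_equilibrium(torch.zeros(N, D))

# Warm start: reuse a cached equilibrium from the previous update
# (simulated here by slightly perturbing the true equilibrium).
Z_cached = Z_cold + 0.01 * torch.randn(N, D)
Z_warm, iters_warm = solve_equilibrium(Z_cached)

print(f"cold start: {iters_cold} iterations, warm start: {iters_warm} iterations")

Because the spectral norms of A and W are bounded so that the map is a contraction, the equilibrium is unique and the iteration converges from any initialization; starting from a nearby cached point simply cuts the number of root-finding steps, which is the efficiency gain VEQ exploits in mini-batch training.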
