Abstract

Graph neural networks (GNNs) have shown great success in various applications. Because real-world graphs are large, training GNNs on distributed systems is desirable. In current training schemes, the edge partitioning strategy has a strong impact on GNN performance due to the unbalanced influence of high-degree nodes and the damaged neighbor integrity of low-degree nodes. Meanwhile, the lack of reconciliation among local models causes convergence to fluctuate across workers. In this work, we design DEPR, a framework suited to distributed GNN training. We propose a degree-sensitive edge partitioning scheme with influence balancing and locality preservation, adapted to distributed GNN training by following an owner-compute rule (each partition performs all computations involving the data it owns). Knowledge distillation and contrastive learning are then used to reconcile the fusion of local models and boost convergence. In extensive experiments on the node classification task over three large-scale graph datasets (Reddit, Amazon, and OGB-Products), DEPR achieves a 2x speedup in convergence and an absolute improvement of up to 3.97 in F1-micro score compared to DistDGL.
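
To make the partitioning idea concrete, the sketch below shows one plausible way a degree-sensitive, owner-compute edge partitioner could behave: edges incident to low-degree nodes are kept together on the partition that owns that node (locality preservation), while edges between high-degree nodes are spread greedily to balance load (influence balancing). This is only an illustrative assumption on our part; the function name, degree threshold, and greedy assignment are not taken from the paper's algorithm.

```python
# Hypothetical sketch of degree-sensitive edge partitioning (not the authors' code).
# Low-degree nodes keep their whole neighborhood on one partition; edges between
# high-degree nodes are spread across partitions to balance their influence.
from collections import defaultdict

def degree_sensitive_partition(edges, num_parts, degree_threshold=64):
    """edges: list of (u, v) pairs; returns {part_id: [edges assigned to that part]}."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    owner = {}                      # low-degree node -> partition that owns it
    load = [0] * num_parts          # per-partition edge counts
    parts = defaultdict(list)

    for u, v in edges:
        lo = u if degree[u] <= degree[v] else v
        if degree[lo] <= degree_threshold:
            # Locality-preserving: all edges of a low-degree node go to its owner partition.
            p = owner.setdefault(lo, min(range(num_parts), key=load.__getitem__))
        else:
            # Influence-balancing: both endpoints are high-degree, so place the edge
            # on the currently least-loaded partition.
            p = min(range(num_parts), key=load.__getitem__)
        parts[p].append((u, v))
        load[p] += 1
    return parts

# Example usage on a tiny edge list, split into 2 partitions.
if __name__ == "__main__":
    toy_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
    print(degree_sensitive_partition(toy_edges, num_parts=2, degree_threshold=2))
```

Under the owner-compute rule described in the abstract, each partition would then run message passing only over the edges it is assigned, which is why keeping a low-degree node's edges co-located preserves its full neighborhood on a single worker.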
