Abstract

Graph Neural Networks (GNNs) have achieved remarkable success in a variety of graph-based learning tasks, thanks in part to their ability to leverage modern GPUs. However, GNNs currently face challenges in using the Tensor Cores (TCs) and CUDA Cores (CDs) of GPUs concurrently. These challenges are exacerbated by repeated, inefficient, and redundant aggregations in GNNs, which result from the high sparsity and irregular non-zero distribution of real-world graphs. We propose RT-GNN, a GNN framework built on the fusion of TC and CD units, which eliminates these redundancies by exploiting the properties of the adjacency matrix. First, a novel GNN representation technique, the hierarchical embedding graph (HEG), is proposed to manage intermediate aggregation results hierarchically, thereby avoiding redundancy in intermediate aggregations. Next, to address the inherent sparsity of graphs, RT-GNN places the blocks (a.k.a. tiles) of the HEG onto TCs and CDs according to their sparsity, using a new block-based row-wise multiplication approach that lets TCs and CDs work concurrently. Experimental results show that HEG outperforms HAG in redundancy elimination by an average speedup of 19.3×, and by up to 72× on the ARXIV dataset. Moreover, in terms of overall performance, RT-GNN outperforms state-of-the-art GNN frameworks (including DGL, HAG, GNNAdvisor, and TC-GNN) by an average factor of 3.1× while maintaining or even improving task accuracy.
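To make the sparsity-based placement of tiles onto TCs and CDs concrete, the following is a minimal sketch, not the authors' implementation: it emulates the dispatch on the CPU with NumPy, routing dense adjacency tiles through a dense GEMM path (a stand-in for Tensor Cores) and sparse tiles through a row-wise accumulation path (a stand-in for CUDA cores). The function name `dispatch_tiles`, the 16×16 tile size, and the density threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dispatch_tiles(adj, features, tile=16, density_threshold=0.5):
    """Illustrative sketch of sparsity-based tile dispatch (not RT-GNN's code).

    Dense tiles take a GEMM-style path (stand-in for Tensor Cores); sparse
    tiles take a row-wise accumulation path (stand-in for CUDA cores).
    `tile` and `density_threshold` are hypothetical parameters.
    """
    n = adj.shape[0]
    out = np.zeros((n, features.shape[1]), dtype=np.float32)
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            block = adj[i:i + tile, j:j + tile]
            nnz = np.count_nonzero(block)
            if nnz == 0:
                continue  # skip empty tiles entirely
            rhs = features[j:j + tile]
            if nnz / block.size >= density_threshold:
                # "TC path": dense tile-level matrix multiplication
                out[i:i + tile] += block @ rhs
            else:
                # "CD path": row-wise accumulation over non-zeros only
                rows, cols = np.nonzero(block)
                for r, c in zip(rows, cols):
                    out[i + r] += block[r, c] * rhs[c]
    return out

# Usage example: the tiled dispatch matches a plain dense aggregation A @ X.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((64, 64)) < 0.05).astype(np.float32)  # sparse adjacency
    X = rng.random((64, 8)).astype(np.float32)             # node features
    assert np.allclose(dispatch_tiles(A, X), A @ X, atol=1e-4)
```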
