Abstract

With the increasing adoption of graph neural networks (GNNs) in the graph-based deep learning community, various graph programming frameworks and models have been developed to improve the productivity of GNN development. Current GNN frameworks rely on the GPU as an essential tool to accelerate GNN training. However, it is still challenging to train GNNs on large graphs with limited GPU memory. Unlike traditional neural networks, GNNs generate mini-batch data by sampling, which involves complex tasks such as traversing the graph to select neighboring nodes and gathering their features. This process takes up most of the training time, and we find that the main bottleneck is transferring node features from CPU to GPU over limited bandwidth. In this paper, we propose a method, Reusing Batch Data, to address this data transmission problem. The method exploits the similarity between adjacent mini-batches to reduce repeated data transmission from CPU to GPU. Furthermore, to reduce the overhead introduced by this method, we design a fast GPU-based algorithm to detect repeated nodes' data, keeping the additional computation time short. Evaluations on three representative GNN models show that our method reduces transmission time by up to 60% and speeds up end-to-end GNN training by up to 1.79× over state-of-the-art baselines. In addition, Reusing Batch Data reduces the GPU memory footprint by about 19% to 40% while still shortening training time compared with a static cache strategy.
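To make the reuse idea concrete, the following is a minimal PyTorch sketch of the core step: detecting which nodes of the current mini-batch were already transferred with the previous one, copying their feature rows GPU-to-GPU, and sending only the new rows over the CPU-GPU link. All names here (e.g. `transfer_with_reuse`) and the sort-plus-binary-search membership test are our illustrative assumptions, not the paper's actual implementation.

```python
import torch

def transfer_with_reuse(batch_ids, prev_ids, prev_feats_gpu, feats_cpu):
    """Hypothetical sketch of Reusing Batch Data's transfer step.

    batch_ids, prev_ids : CUDA LongTensors of node ids (previous batch non-empty)
    prev_feats_gpu      : feature rows of the previous batch, resident on the GPU
    feats_cpu           : full feature matrix in (ideally pinned) host memory
    """
    # GPU-side detection of repeated nodes: sort the previous batch's ids,
    # then binary-search each current id against them.
    sorted_prev, order = torch.sort(prev_ids)
    pos = torch.searchsorted(sorted_prev, batch_ids)
    pos = pos.clamp(max=sorted_prev.numel() - 1)
    shared = sorted_prev[pos] == batch_ids  # mask: id seen in previous batch

    out = torch.empty(batch_ids.numel(), feats_cpu.size(1),
                      device=batch_ids.device)

    # Repeated rows: cheap GPU-to-GPU copy instead of a PCIe transfer.
    out[shared] = prev_feats_gpu[order[pos[shared]]]

    # Only genuinely new rows cross the CPU-GPU link (non_blocking helps
    # when feats_cpu is in pinned memory).
    new_ids = batch_ids[~shared].cpu()
    out[~shared] = feats_cpu[new_ids].to(out.device, non_blocking=True)
    return out
```

The saving grows with the overlap between consecutive mini-batches: the larger the shared-node fraction, the more of the batch is served by the fast GPU-to-GPU copy rather than the bandwidth-limited host transfer.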
