Abstract

Graph neural networks (GNNs), which extend conventional deep learning techniques to graph-structured data, have demonstrated powerful graph representation learning ability. Typical existing GNNs use a neural-network-based neighborhood message passing mechanism that updates each target vertex's representation by aggregating feature messages from its neighboring source vertices. To accelerate GNN computations, customized accelerators that follow this per-vertex neighborhood aggregation pattern have been proposed. Through analysis, we observe that a naive implementation of neighborhood aggregation results in redundant computation and communication. In this paper, we propose a novel redundancy-eliminated GNN accelerator, termed ReGNN, supported by an algorithm-architecture co-design. We first propose a dynamic redundancy-eliminated neighborhood message passing algorithm for GNNs, and then design a novel architecture that supports the proposed algorithm and translates redundancy elimination into performance improvement. ReGNN is also a configurable pipelined architecture that supports different GNN variants. Because it performs the same computations, ReGNN provides the same accuracy as traditional GNNs. To the best of our knowledge, ReGNN is the first accelerator that eliminates computation redundancy in GNNs. ReGNN achieves an average 9.1× speedup and 8.9× better energy efficiency over state-of-the-art GNN accelerators.
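For concreteness, the Python sketch below illustrates the naive neighborhood aggregation pattern the abstract refers to, and why it is redundant; the dict-of-lists graph representation, the sum aggregator, and the name `naive_aggregate` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of naive neighborhood aggregation (message passing),
# assuming a sum aggregator and a dict-of-lists graph; all names here
# are illustrative, not taken from ReGNN.

def naive_aggregate(features, in_neighbors):
    """Update each target vertex by summing feature messages from its
    in-neighbors -- the per-vertex pattern existing accelerators follow."""
    dim = len(next(iter(features.values())))
    updated = {}
    for v, srcs in in_neighbors.items():
        # A naive implementation recomputes this sum from scratch for every
        # target, even when targets share large groups of common neighbors;
        # those repeated partial sums (and the repeated feature fetches that
        # feed them) are the redundancy ReGNN aims to eliminate.
        message = [0.0] * dim
        for u in srcs:
            message = [m + f for m, f in zip(message, features[u])]
        updated[v] = message
    return updated

# Toy example: targets 0 and 1 share neighbors {2, 3}, so the partial sum
# over {2, 3} is computed twice by the naive loop above.
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [2.0, 2.0]}
in_neighbors = {0: [2, 3], 1: [2, 3], 2: [0], 3: [1]}
print(naive_aggregate(features, in_neighbors))
```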
