Abstract

Graph Convolutional Network (GCN) models have attracted attention for their high accuracy in interpreting graph data. One of the primary building blocks of a GCN model is aggregation, which gathers and averages the feature vectors of the vertices adjacent to each vertex. Aggregation is performed by multiplying the adjacency matrix with the feature matrix. Both matrices are too large to fit in the on-chip cache, and the adjacency matrix is highly sparse; these properties lead to little data reuse and numerous main-memory accesses, making aggregation memory-intensive. We propose GraNDe, a near-data-processing (NDP) architecture that accelerates memory-intensive aggregation by placing processing elements near the DRAM datapath to exploit rank-level parallelism. By exploring how the operand matrices are mapped to DRAM ranks, we find that the optimal mapping differs depending on the configuration of each GCN layer. With our optimal layer-by-layer mapping scheme, GraNDe achieves a speedup of up to 4.3× over the baseline system on Open Graph Benchmark datasets.
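As a rough, self-contained illustration of the aggregation step described above (a sketch on assumed toy data, not code from the paper), the snippet below expresses mean aggregation for one GCN layer as the product of a row-normalized adjacency matrix and a feature matrix; the example graph, feature width, and variable names are all hypothetical.

```python
# Minimal sketch of GCN mean aggregation as an adjacency x feature matrix product.
import numpy as np

# Hypothetical toy graph: 4 vertices, undirected edges (0-1, 0-2, 1-3).
num_vertices = 4
edges = [(0, 1), (0, 2), (1, 3)]

# Build the adjacency matrix A (dense here only for illustration; real graphs are highly sparse).
A = np.zeros((num_vertices, num_vertices))
for u, v in edges:
    A[u, v] = 1.0
    A[v, u] = 1.0

# Feature matrix X: one feature vector per vertex (feature length 8 chosen arbitrarily).
X = np.random.rand(num_vertices, 8)

# Row-normalize A so the product averages (rather than sums) neighbor features.
deg = A.sum(axis=1, keepdims=True)
A_mean = A / np.maximum(deg, 1.0)

# Aggregation: each output row is the mean of the feature vectors of that vertex's neighbors.
aggregated = A_mean @ X
print(aggregated.shape)  # (4, 8)
```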
