Abstract

Artificial intelligence for graph-structured data has achieved remarkable success in applications such as recommendation systems, social networks, drug discovery, and circuit annotation. Graph convolutional networks (GCNs) are an effective way to learn representations of various graphs. The increasing size and complexity of graphs call for in-memory computing (IMC) accelerators for GCNs to alleviate massive data transmission between off-chip memory and processing units. However, GCN implementation with IMC is challenging because of the large memory consumption, irregular memory access, and device nonidealities. Herein, a fully binarized GCN (BGCN) accelerator based on computational resistive random-access memory (RRAM), developed through software-hardware codesign, is presented. The essential GCN operations, including aggregation and combination, are implemented on RRAM crossbar arrays through the cooperation of multiply-and-accumulate (MAC) and content-addressable memory (CAM) operations. By leveraging model quantization and IMC on RRAM, the BGCN accelerator demonstrates lower RRAM usage, high robustness to device variations, high energy efficiency, and classification accuracy comparable to current state-of-the-art GCN accelerators on both graph classification (MUTAG and PTC datasets) and node classification (Cora and CiteSeer datasets) tasks. This work provides a promising approach for edge intelligent systems to efficiently process graph-structured data.
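To make the aggregation and combination steps mentioned above concrete, the sketch below shows one binarized GCN layer in plain NumPy. This is only an illustration under common binarized-network assumptions, not the paper's RRAM implementation: the two matrix products stand in for the crossbar MAC/CAM operations, and the names `A_hat`, `H`, `W`, and the `sign_binarize` helper are introduced here for illustration.

```python
import numpy as np

def sign_binarize(x):
    """Binarize values to {+1, -1}; zero maps to +1 (a common convention)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binarized_gcn_layer(A_hat, H, W):
    """One binarized GCN layer: aggregation followed by combination.

    A_hat : adjacency matrix with self-loops (0/1 entries), shape (N, N)
    H     : binarized node features, entries in {+1, -1}, shape (N, F_in)
    W     : binarized weight matrix, entries in {+1, -1}, shape (F_in, F_out)
    """
    aggregated = A_hat @ H          # aggregation: sum features over each node's neighborhood
    combined = aggregated @ W       # combination: linear transform with binarized weights
    return sign_binarize(combined)  # re-binarize activations for the next layer

# Tiny usage example: a 3-node path graph with 4-dimensional binary features.
A_hat = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [0, 1, 1]], dtype=np.int8)
H = sign_binarize(np.random.randn(3, 4))
W = sign_binarize(np.random.randn(4, 2))
print(binarized_gcn_layer(A_hat, H, W))
```

With all operands restricted to {+1, -1}, each dot product reduces to an XNOR-and-popcount style computation, which is what makes such layers amenable to in-memory MAC on dense crossbar arrays.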
