Abstract

Modern recommendation systems integrate graph convolutional neural networks (GCNs) to enhance embedding representations. Compared with widely deployed neural network-based models, the extra message-propagation layer of GCN-based recommendation involves extensive computation and irregular memory accesses. However, architecture designs for prevailing deep neural network recommendation models assume simple pooling in the embedding layer. ReRAM-based GCN accelerators are specialized for graph-related operations, but they are designed for general graphs, whereas GCN-based recommendation models operate mainly on the user-item graph. In this paper, we propose a resistive random-access memory (ReRAM) based processing-in-memory (PIM) accelerator, ReGCNR, for GCN-based recommendation. ReGCNR features three key innovations. First, we exploit 3-dimensional (3-D) stacked heterogeneous ReRAM to accommodate the large embedding table and user-item graph. Second, we propose a joint degree-mapping scheme that maximizes the efficiency of the execution pipeline. Third, ReGCNR combines a well-coordinated pipeline with a hardware scheduling design to boost overall system performance. Results show that ReGCNR outperforms GPU by 69.83$$\times$$ and 56.67$$\times$$ in terms of average speedup and energy saving, respectively. In addition, ReGCNR outperforms state-of-the-art ReRAM-based solutions by 11.13$$\times$$ speedup and 7.22$$\times$$ energy saving on average.
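For context, a representative message-propagation rule for GCN-based recommendation (a LightGCN-style formulation; the exact propagation rule targeted by ReGCNR may differ) aggregates neighbor embeddings over the user-item graph, which is the source of the extensive computation and irregular memory accesses noted above:

$$e_u^{(k+1)} = \sum_{i \in \mathcal{N}(u)} \frac{1}{\sqrt{|\mathcal{N}(u)|\,|\mathcal{N}(i)|}}\, e_i^{(k)}, \qquad e_i^{(k+1)} = \sum_{u \in \mathcal{N}(i)} \frac{1}{\sqrt{|\mathcal{N}(i)|\,|\mathcal{N}(u)|}}\, e_u^{(k)}$$

where $$\mathcal{N}(u)$$ and $$\mathcal{N}(i)$$ denote the item neighbors of user $$u$$ and the user neighbors of item $$i$$, respectively. By contrast, a simple pooling-based embedding layer only sums or averages a user's own feature embeddings and does not traverse the graph.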
