Abstract
As data movement and power budgets become key bottlenecks in the design of computing systems, interest in unconventional approaches such as processing-in-memory (PIM) and machine learning (ML) accelerators, especially neural network (NN)-based accelerators, has grown significantly. Resistive random access memory (ReRAM) is a promising technology for architecting efficient PIM- and NN-based accelerators because it can serve both as high-density, low-energy storage and as an in-memory computation/search engine. In this paper, we present a survey of techniques for designing ReRAM-based PIM and NN architectures. By classifying these techniques along key parameters, we underscore their similarities and differences. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
Highlights
In recent years, interest in machine learning and especially neural network-based techniques has grown significantly
Since the CONV (convolution) operation in convolutional neural networks (CNNs) involves matrix-vector multiplication (MVM), and CONV layers account for more than 95% of the computations in CNNs, a resistive RAM (ReRAM)-based processing engine can boost the efficiency of CNNs significantly [12,13]
A neural network is a machine learning (ML) approach modeled after the biological nervous system, which predicts the output by computing a nonlinear function on the weighted sum of the inputs
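The last highlight above can be made concrete with a minimal sketch of a single artificial neuron: it forms the weighted sum of its inputs plus a bias and passes the result through a nonlinearity (a sigmoid here; the function names and values are illustrative, not from the survey).

```python
import math

def neuron(inputs, weights, bias):
    """Single artificial neuron: a nonlinear function of the weighted input sum."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoid nonlinearity

# Example: weighted sum is 0.5*2.0 + (-1.0)*1.0 + 0.0 = 0, so the output is sigmoid(0) = 0.5
out = neuron([0.5, -1.0], [2.0, 1.0], 0.0)
```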
Summary
Interest in machine learning and especially neural network-based techniques has grown significantly. Since the CONV (convolution) operation in convolutional neural networks (CNNs) involves matrix-vector multiplication (MVM), and CONV layers account for more than 95% of the computations in CNNs, a ReRAM-based processing engine can boost the efficiency of CNNs significantly [12,13]. These factors have motivated researchers to implement a variety of ML/NN architectures on ReRAM, such as multi-layer perceptrons [14,15], CNNs [16,17,18], tensorized NNs [19], and auto-associative memories [14].
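The CONV-as-MVM observation above can be sketched as follows: each output pixel of a (valid-padding) 2-D convolution is a dot product between the flattened kernel and an image patch, which is exactly the vector-times-weight-column operation a ReRAM crossbar evaluates in the analog domain. This is a simplified illustration (single channel, unit stride, pure Python), not the accelerators' actual dataflow.

```python
def conv2d_as_mvm(image, kernel):
    """Express a 2-D convolution (valid padding, stride 1) as a series of
    dot products between the flattened kernel (one crossbar column of
    weights) and successive flattened image patches (input voltage vectors)."""
    H, W = len(image), len(image[0])
    k = len(kernel)
    flat_kernel = [v for row in kernel for v in row]  # weights mapped to a crossbar column
    outputs = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = [image[i + di][j + dj] for di in range(k) for dj in range(k)]
            # One analog MVM step: current summing along the bitline
            outputs.append(sum(w * x for w, x in zip(flat_kernel, patch)))
    return outputs

# 2x2 identity-diagonal kernel over a 2x2 image picks out image[0][0] + image[1][1]
result = conv2d_as_mvm([[1, 2], [3, 4]], [[1, 0], [0, 1]])
```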