Abstract
Resistive switching random access memory (RRAM) has been explored to accelerate the computation of neural networks. RRAM with linear conductance modulation is usually required for efficient weight updating during online training with the back-propagation algorithm; however, most RRAM devices exhibit nonlinear switching characteristics. Here, to overcome this dilemma, we designed a novel weight updating principle for binarized neural networks, which enables nonlinear RRAM to perform efficient weight updates during online training. Moreover, a vector-matrix multiplication scheme is designed to compute the dot-products of the forward and backward propagation in parallel. A 1-kb nonlinear RRAM array is fabricated to demonstrate the feasibility of the analog accumulation and the parallel vector-matrix multiplication. The results achieved in this work offer new solutions for future energy-efficient neural networks.
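To make the parallel dot-product claim concrete, the following is a minimal sketch of crossbar-style vector-matrix multiplication, assuming a hypothetical differential-pair mapping in which each weight is stored as the conductance difference of two cells (G_pos - G_neg). It emulates in NumPy the column-wise current summation a physical array performs via Kirchhoff's current law; it is not the authors' implementation.

import numpy as np

def crossbar_vmm(v_in, g_pos, g_neg):
    """Emulate one read of a differential RRAM crossbar.

    v_in  : row read voltages, shape (n_rows,)
    g_pos : conductances of the "positive" cells, shape (n_rows, n_cols)
    g_neg : conductances of the "negative" cells, shape (n_rows, n_cols)

    In hardware every column sums its cell currents in parallel
    (Kirchhoff's current law); the dot product below emulates that.
    """
    return v_in @ (g_pos - g_neg)   # output currents, shape (n_cols,)

# Toy example: 4 inputs feeding 3 outputs.
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 0.2, size=4)            # read voltages (V)
gp = rng.uniform(1e-6, 1e-4, size=(4, 3))    # conductances (S)
gn = rng.uniform(1e-6, 1e-4, size=(4, 3))
print(crossbar_vmm(v, gp, gn))               # column currents (A)

Because all columns integrate their currents simultaneously, both the forward pass (inputs applied to rows) and the backward pass (errors applied through the transposed mapping) reduce to a single parallel read of the array.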
Highlights
The first generation of AlphaGo ran on 1,920 central processing units and 280 graphics processing units with a peak power of half a megawatt.6 There are severe design constraints on the performance, power, and area of neural network systems at the edge,7,8 such as wearable devices, smart sensors, and autonomous driving
Resistive switching random access memory (RRAM) with linear conductance modulation is usually required for efficient weight updating during online training according to the back-propagation algorithm
To overcome this dilemma, we designed a novel weight updating principle for binarized neural networks, which enables nonlinear RRAM to perform efficient weight updates during online training (a sketch of the idea follows below)
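As an illustration of why device nonlinearity becomes harmless under a binarized scheme, the following is a minimal sketch in the spirit of standard binarized-network training (e.g., BinaryConnect), not necessarily the paper's exact rule: a real-valued hidden weight accumulates gradients off-device, while the RRAM cell only stores its sign as one of two stable states, so every update is a full SET or RESET pulse rather than a fine-grained analog conductance step. All names (w_hidden, update, etc.) are hypothetical.

import numpy as np

def binarize(w_hidden):
    """Map hidden weights to the two stable device states: +1 (LRS) / -1 (HRS)."""
    return np.where(w_hidden >= 0.0, 1.0, -1.0)

def update(w_hidden, w_binary, grad, lr=0.01):
    """One training step: accumulate the gradient into the hidden weights,
    then flip only those devices whose hidden weight changed sign."""
    w_hidden = np.clip(w_hidden - lr * grad, -1.0, 1.0)
    w_new = binarize(w_hidden)
    flips = w_new != w_binary        # cells that need a SET/RESET pulse
    return w_hidden, w_new, flips

rng = np.random.default_rng(1)
w_h = rng.uniform(-1.0, 1.0, size=8)    # hidden (real-valued) weights
w_b = binarize(w_h)                     # states actually stored in RRAM
g = rng.normal(size=8)                  # toy gradient from back-propagation
w_h, w_b, flips = update(w_h, w_b, g)
print(flips)    # which cells receive a programming pulse this step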
Summary
The first generation of AlphaGo ran on 1,920 central processing units and 280 graphics processing units with a peak power of half a megawatt.6 There are severe design constraints on the performance, power, and area of neural network systems at the edge,7,8 such as wearable devices, smart sensors, and autonomous driving. RRAM with linear conductance modulation is usually required for efficient weight updating during online training according to the back-propagation algorithm.