Abstract

Resistive switching random access memory (RRAM) is a promising candidate for the basic in-memory computing unit of deep neural network (DNN) accelerators owing to its non-volatility, low power consumption, and small footprint. The RRAM-based crossbar array (RRAM CBA) is commonly employed to accelerate DNNs because it intrinsically executes multiplication-and-accumulation (MAC) operations according to Kirchhoff's law. However, major non-ideal effects in real RRAM CBAs, including IR-drop and Stuck-at-Faults (SAF), are typically ignored in DNN accelerator design for the sake of training speed and design closure. These non-ideal effects cause variations in the output column current and voltage, which seriously degrade computing accuracy; directly mapping DNN weights onto an RRAM CBA without accounting for IR-drop and SAF is therefore unrealistic. In this work, two efficient optimization methods are proposed to recover computation accuracy: adding an additional tunable RRAM row, which reduces the variation of the output column current of the RRAM CBA, and a trans-impedance amplifier (TIA) based RRAM scheme, which reduces the variation of the TIA output voltage in each column. The two methods are evaluated across different RRAM CBA sizes and different RRAM cell resistance levels. Simulation results show that both methods suppress the degradation of computing accuracy induced by IR-drop and SAF for LeNet-5 on the MNIST dataset and VGG16 on the CIFAR-10 dataset.
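The MAC operation the abstract refers to follows from Ohm's law per cell and Kirchhoff's current law per column: each column current is the dot product of the input row voltages with that column's conductances. A minimal sketch of this ideal (non-ideal-effect-free) behavior, with illustrative voltage and conductance values not taken from the paper:

```python
import numpy as np

def crossbar_mac(voltages, conductances):
    """Ideal RRAM crossbar column currents: I_j = sum_i V_i * G_ij.

    Per-cell currents (Ohm's law) sum on each column wire
    (Kirchhoff's current law), giving one MAC per column.
    """
    return voltages @ conductances

# 2 input rows, 3 output columns; voltages in volts, conductances in siemens
V = np.array([0.2, 0.1])
G = np.array([[1e-4, 2e-4, 5e-5],
              [3e-4, 1e-4, 2e-4]])
I = crossbar_mac(V, G)  # one current reading per column
```

In a real array, IR-drop along the wires and SAF-corrupted cells make the measured currents deviate from this ideal dot product, which is the deviation the paper's two methods compensate.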

Highlights

  • We propose a simple optimization method of adding a tunable resistive switching random access memory (RRAM) row to the RRAM crossbar array (CBA) to compensate for the output current deviation in the CBA

  • Later steps of the proposed flow (earlier steps are truncated in this excerpt): the actual column output current is obtained from simulation; (4) the average current shift between the ideal and actual situations is calculated; (5) the resistance value of each RRAM in the additional row, or in the RRAM-trans-impedance amplifier (TIA) scheme, is derived from the results of step (2) to compensate the current or voltage differential in each column; (6) the average accuracy of the RRAM CBA after optimization is calculated over all test data with the non-ideal effects included
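The compensation idea in steps (4)-(5) can be sketched as follows: average the per-column current shift between ideal and simulated outputs over a calibration set, then choose the extra row's conductances so that, driven at a fixed voltage, the row injects that missing current. The function name, the `v_comp` parameter, and the clipping behavior are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def compensation_row_conductances(ideal_I, actual_I, v_comp=0.2):
    """Sketch of calibrating one additional tunable RRAM row.

    ideal_I, actual_I: (num_samples, num_columns) column currents from the
    ideal model and from simulation with IR-drop/SAF, respectively.
    Returns one conductance per column for the extra row.
    """
    # average per-column current shift over the calibration inputs
    delta = np.mean(ideal_I - actual_I, axis=0)
    # Ohm's law: the extra row must inject delta amps when driven at v_comp volts
    g_comp = delta / v_comp
    # a passive cell cannot have negative conductance; clip as a placeholder
    # (a real design would need a different mechanism for negative shifts)
    return np.clip(g_comp, 0.0, None)
```

For example, if a column ideally outputs 50 uA but the simulated array delivers 40 uA, a compensation cell of 50 uS driven at 0.2 V restores the missing 10 uA.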

Summary

INTRODUCTION

In the big-data era, there is an ever-increasing demand for higher data-processing performance. Several non-ideal effects arise at the array level, e.g., IR-drop [14], [15], and at the cell level, e.g., Stuck-at-Faults (SAF) [16], [17]. Such effects limit the computational accuracy of the RRAM-based DNN accelerator. SAF, comprising Stuck-At-0 (SA0) and Stuck-At-1 (SA1) faults caused by cell failure (the device getting stuck at the low resistance state (LRS) or the high resistance state (HRS), respectively), corrupts the expected weight pattern [19]. Both effects reduce computational accuracy if the ideal DNN is mapped directly onto a realistic RRAM CBA. We propose two efficient, optimized hardware methods to suppress the shift of the output current and voltage, which addresses the accuracy loss caused by IR-drop and SAF without retraining the neural network.
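To make the SAF mechanism concrete, a simple fault-injection sketch: cells are randomly pinned to the LRS or HRS conductance regardless of the weight they were programmed to hold, following the SA0/LRS and SA1/HRS pairing stated above. The fault rates, seed handling, and function name are illustrative assumptions:

```python
import numpy as np

def inject_saf(G, g_lrs, g_hrs, p_sa0=0.01, p_sa1=0.01, seed=0):
    """Illustrative SAF model for a conductance matrix G.

    SA0 cells are pinned to the LRS conductance and SA1 cells to the
    HRS conductance (the pairing used in this article); all other cells
    keep their programmed conductance.
    """
    rng = np.random.default_rng(seed)
    faulty = G.copy()
    r = rng.random(G.shape)
    faulty[r < p_sa0] = g_lrs                            # SA0: stuck at LRS
    faulty[(r >= p_sa0) & (r < p_sa0 + p_sa1)] = g_hrs   # SA1: stuck at HRS
    return faulty
```

Running the MAC on such a faulty matrix instead of the programmed one is what damages the weight pattern and, with it, the classification accuracy.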

PRELIMINARIES
IR-drop and SAF
PROPOSED METHODOLOGY
Additional tunable RRAM row
Trans-impedance amplifier based RRAM
Test Accuracy
SIMULATION AND RESULT
Results on IR-drop
Results on IR-drop and SAF
Findings
CONCLUSIONS