Resistive switching random access memory (RRAM) is a promising candidate for the basic in-memory computing unit in deep neural network (DNN) accelerator design owing to its non-volatility, low power consumption, and small footprint. RRAM-based crossbar arrays (RRAM CBAs) are commonly employed to accelerate DNNs because they intrinsically execute multiply-and-accumulate (MAC) operations according to Kirchhoff's current law. However, major non-ideal effects in real RRAM CBAs, including IR-drop and stuck-at faults (SAF), are typically ignored in DNN accelerator design for the sake of training speed and design closure. These non-ideal effects cause variations in the output column current and voltage, which in turn seriously degrade computing accuracy. Directly mapping DNN model weights onto an RRAM CBA without accounting for IR-drop and SAF is therefore unrealistic. In this work, two efficient optimization methods are proposed to recover computing accuracy: adding an additional tunable RRAM row, which reduces the variation of the output column current of the RRAM CBA, and a trans-impedance amplifier (TIA)-based RRAM scheme, which reduces the variation of the output voltage of the TIA in each column. The two methods are evaluated on RRAM CBAs of different sizes and with different RRAM cell resistance levels. Simulation results show that both methods suppress the degradation of computing accuracy induced by IR-drop and SAF for LeNet-5 on the MNIST dataset and VGG16 on the CIFAR-10 dataset.
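For intuition only (this sketch is not from the paper): the abstract's premise is that a crossbar column computes a MAC as the Kirchhoff-law sum of per-cell currents, and that SAF perturbs the programmed conductances. The minimal NumPy model below illustrates that premise; the array size, conductance levels, and fault rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 64, 64                 # crossbar dimensions (assumed)
G_on, G_off = 1e-4, 1e-6            # cell conductance levels in siemens (assumed)

# Programmed conductance map encoding binarized DNN weights (assumed encoding).
G = rng.choice([G_off, G_on], size=(rows, cols))

# Input voltages applied to the rows.
V = rng.uniform(0.0, 0.2, size=rows)

# Ideal column MAC per Kirchhoff's current law: I_j = sum_i G[i, j] * V[i].
I_ideal = V @ G

# Stuck-at-fault injection: cells stuck at G_off (SA0) or G_on (SA1).
G_faulty = G.copy()
sa0 = rng.random(G.shape) < 0.05    # 5% stuck-at-OFF rate (assumed)
sa1 = rng.random(G.shape) < 0.05    # 5% stuck-at-ON rate (assumed)
G_faulty[sa0] = G_off
G_faulty[sa1] = G_on

I_faulty = V @ G_faulty
print("max column-current error (A):", np.abs(I_faulty - I_ideal).max())
```

IR-drop along the wires would further perturb the effective voltage seen by each cell; modeling it requires solving the resistive network of the array, which this sketch deliberately omits.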