Abstract

Although resistive RAM (ReRAM) can support highly efficient matrix-vector multiplication, which is very useful for machine learning and other applications, non-ideal hardware behavior such as stuck-at faults (SAFs) and IR drop is a serious concern for ReRAM crossbar array (RCA)-based deep learning accelerators. Previous work has addressed the nonideality problem either through hardware redundancy, which permanently increases hardware cost, or through software retraining, which may be even more costly or outright infeasible because it requires a training dataset and incurs high computation overhead. In this paper we propose a very lightweight method that can be applied on top of existing hardware or software solutions. Our method, called FPT (Forward-Parameter Tuning), exploits a statistical property of the activation data in neural network layers and can mitigate the impact of mild nonidealities in ReRAM crossbar arrays for deep learning applications without extra hardware, a dataset, or gradient-based training. Our experimental results on the MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets with binary and multi-bit networks demonstrate that our technique is very effective, both alone and in combination with previous methods, at fault rates of up to 20%, which is higher than some previous remapping methods can tolerate. We also evaluate our method in the presence of other nonidealities such as device variability and IR drop. Further, we provide an analysis based on the concept of effective fault rate, which not only shows that effective fault rate is a useful tool for predicting the accuracy of faulty RCA-based neural networks, but also explains why mitigating the SAF problem is harder for multi-bit neural networks.
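
As a rough illustration of the nonideality problem described above (this sketch is not from the paper; the function name, conductance range, fault rate, and SA0/SA1 split are all assumptions), the following Python snippet injects stuck-at faults into a crossbar-mapped weight matrix and measures the error they introduce into a matrix-vector product.

```python
# Hypothetical illustration: stuck-at faults in a ReRAM crossbar-mapped weight
# matrix and their effect on a matrix-vector multiplication. All values assumed.
import numpy as np

rng = np.random.default_rng(0)

def inject_stuck_at_faults(weights, fault_rate=0.20, g_min=0.0, g_max=1.0):
    """Force a random subset of crossbar cells to a stuck conductance level.

    Stuck-at-0 cells read as the minimum conductance (g_min); stuck-at-1 cells
    read as the maximum conductance (g_max), regardless of the programmed weight.
    """
    faulty = weights.copy()
    mask = rng.random(weights.shape) < fault_rate
    stuck_high = rng.random(weights.shape) < 0.5   # split faults between SA0 and SA1
    faulty[mask & stuck_high] = g_max
    faulty[mask & ~stuck_high] = g_min
    return faulty

# Toy layer: 128 inputs, 64 outputs, weights normalized to the conductance range [0, 1].
W = rng.uniform(0.0, 1.0, size=(64, 128))
x = rng.uniform(0.0, 1.0, size=128)                # activation vector

W_faulty = inject_stuck_at_faults(W, fault_rate=0.20)
ideal = W @ x
faulty = W_faulty @ x

print("relative output error:", np.linalg.norm(faulty - ideal) / np.linalg.norm(ideal))
```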
