Abstract

For resistive RAM (RRAM)-based deep neural networks (DNNs), random telegraph noise (RTN) causes accuracy loss during inference. In this article, we systematically investigated the impact of RTN on complex DNNs with different data sets. Using eight mainstream DNNs and four data sets, we explored the origin of the RTN-induced accuracy loss. Based on this understanding, we proposed, for the first time, a new method to estimate the accuracy loss. The method was verified with ten other DNN/data set combinations that were not used to establish it. Finally, we discussed its potential adoption for the co-optimization of the DNN architecture and the RRAM technology, paving the way toward RTN-induced accuracy loss mitigation for future neuromorphic hardware systems.

Highlights

  • With the same level of random telegraph noise (RTN), the CIFAR-10 and Fashion data sets show over 30% accuracy loss with the same deep neural networks (DNNs), which is intolerable in practice

  • We investigated the impact of RTN on the inference accuracy of complex DNNs

Summary

INTRODUCTION

Artificial Intelligence (AI) has become the critical driver for edge computing, which is crucial for solving the latency issues of future Internet-of-Things applications [1]. Several pioneering works [4, 9, 10, 11] have investigated the RTN-induced accuracy loss, but they only assessed simple perceptron networks with simple data sets such as MNIST. The distribution of the DIFF value, a figure of merit defined in this work that can be extracted from any DNN with any data set, exhibits a strong correlation with the RTN-induced accuracy loss. Based on this understanding, we proposed a new, fast method for assessing the RTN-induced accuracy loss of mainstream DNNs. We show the potential use of this method for RTN mitigation through co-design between the DNN architecture and the RRAM technology.
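As a rough illustration of how such an assessment can be set up, the sketch below injects a simple two-level RTN perturbation into the weights of a toy network and estimates the accuracy drop by Monte-Carlo sampling. The noise model, its amplitude, the synthetic test data, and all function names are illustrative assumptions for this sketch; they are not the empirical RTN model or the DIFF-based method developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_rtn(weights, amplitude=0.05, p_active=0.5):
    """Perturb RRAM-mapped weights with a simple two-level RTN model.

    Each device is assumed to toggle between its nominal conductance and a
    state shifted by a relative amplitude dG/G; only a fraction p_active of
    devices is in the shifted state at read time. Both amplitude and
    p_active are illustrative assumptions, not values from the paper.
    """
    toggled = rng.random(weights.shape) < p_active
    shift = 1.0 + amplitude * np.where(rng.random(weights.shape) < 0.5, 1, -1)
    return np.where(toggled, weights * shift, weights)

def accuracy(w1, w2, x, labels):
    """Accuracy of a tiny two-layer MLP (ReLU hidden layer)."""
    h = np.maximum(x @ w1, 0.0)
    pred = np.argmax(h @ w2, axis=1)
    return np.mean(pred == labels)

# Synthetic stand-ins; a real study would use trained weights and a real test set.
w1, w2 = rng.normal(size=(784, 128)), rng.normal(size=(128, 10))
x, labels = rng.normal(size=(1000, 784)), rng.integers(0, 10, size=1000)

baseline = accuracy(w1, w2, x, labels)
# Monte-Carlo estimate of the RTN-induced accuracy loss over repeated reads.
losses = [baseline - accuracy(add_rtn(w1), add_rtn(w2), x, labels)
          for _ in range(20)]
print(f"baseline={baseline:.3f}, mean RTN-induced loss={np.mean(losses):.3f}")
```

Such brute-force Monte-Carlo evaluation becomes expensive for large DNNs, which is what motivates a fast, distribution-based estimate like the DIFF-based method described in this work.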

Empirical model for RTN simulation
Acceleration for large-scale DNN simulation
RTN-INDUCED ACCURACY LOSS FOR COMPLEX DNNS
Impact of the DNN size
Impact of the pulse width
Impact of the different layers in the DNN
Origin for the RTN-induced accuracy loss in DNNs
Method for the fast assessment
Method validation
Findings
CONCLUSIONS