Abstract

Nonvolatile-memory-based computing-in-memory architectures are one solution to the massive data-movement problem in the conventional von Neumann computing architecture, since multiply-and-accumulate (MAC) operations can be performed directly inside the memory array. This paper investigates the errors arising from the imperfections of resistive random access memory (ReRAM), including program error, read fluctuation, and retention drift, and their impacts on inference accuracy in convolutional neural networks. The influence of weight errors in each convolution layer is evaluated according to the change in neuron distributions. A batch normalization (BN) parameter calibration method is proposed to correctly scale and shift the MAC results and thereby compensate for weight errors. This calibrated BN process drastically improves inference accuracy not only for the as-programmed analog ReRAM array but also for devices after long-term retention. This approach provides an effective direction for dealing with nonvolatile-memory-induced errors in artificial neural networks.
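The BN calibration idea can be illustrated with a minimal sketch. The following is not the paper's implementation; it is a hypothetical single layer where ReRAM non-idealities are lumped into an additive Gaussian weight perturbation, and the BN mean/variance are re-estimated from the MAC results of the perturbed array so that the scale-and-shift matches the shifted neuron distribution. All names (`W_ideal`, `W_drifted`, `calibration batch`, noise magnitudes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully-connected layer standing in for a conv layer:
# y = BN(x @ W.T). Sizes and noise levels are illustrative only.
n_in, n_out, n_cal = 64, 32, 1000
W_ideal = rng.normal(0.0, 0.1, (n_out, n_in))

# Emulate ReRAM imperfections (program error + retention drift)
# as one additive Gaussian perturbation on the stored weights.
W_drifted = W_ideal + rng.normal(0.0, 0.03, W_ideal.shape)

# BN statistics originally learned with the ideal weights.
X_cal = rng.normal(0.0, 1.0, (n_cal, n_in))   # calibration batch
mac_ideal = X_cal @ W_ideal.T
mu, var = mac_ideal.mean(0), mac_ideal.var(0)
gamma, beta = np.ones(n_out), np.zeros(n_out)

def bn(y, mu, var, eps=1e-5):
    """Scale-and-shift MAC results with given BN statistics."""
    return gamma * (y - mu) / np.sqrt(var + eps) + beta

# Without calibration: BN uses stale statistics, so the output
# distribution of the drifted array is mis-scaled.
mac_drift = X_cal @ W_drifted.T
out_stale = bn(mac_drift, mu, var)

# Calibration: re-estimate BN mean/variance from the MAC results of
# the drifted array itself, restoring a normalized output distribution.
mu_cal, var_cal = mac_drift.mean(0), mac_drift.var(0)
out_cal = bn(mac_drift, mu_cal, var_cal)
```

After calibration, each output channel is again approximately zero-mean and unit-variance, which is the condition the downstream layers were trained to expect; the weights themselves are never rewritten.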
