Abstract

The memristor-based neuromorphic computing system (NCS), with its emerging architecture that integrates storage and computing, has drawn extensive attention. Because of its unique nonvolatility and programmability, the memristor is an ideal nano-device for realizing neural synapses in VLSI implementations of neural networks. In hardware implementations, however, the performance of memristive neural networks is degraded by quantization error, write error, and conductance drift, which seriously hinders their practical application. In this paper, a novel weight optimization scheme combining quantization and Bayesian inference is proposed to alleviate this problem. Specifically, the weight deviation in the memristive neural network is transformed into weight uncertainty in a Bayesian neural network, which makes the network insensitive to unexpected weight changes. A quantization regularization term is designed and applied during training of the Bayesian neural network, reducing the quantization error and improving the robustness of the network. Furthermore, a partial training method is proposed to extend the applicability of the scheme to large-scale neural networks. Finally, experiments on a multilayer perceptron and LeNet demonstrate that the proposed weight optimization scheme can significantly enhance the robustness of memristive neural networks.
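The quantization regularization idea mentioned above can be illustrated with a minimal sketch: a penalty that pulls each weight toward its nearest programmable conductance level, so that quantizing the trained weights onto the memristor's discrete states changes them as little as possible. This is an assumed formulation for illustration only, not the paper's actual loss term; the function name and the level grid are hypothetical.

```python
import numpy as np

def quantization_regularizer(weights, levels):
    """Mean squared distance from each weight to its nearest quantization level.

    Adding this term to the training loss (scaled by a coefficient) discourages
    weights from settling between programmable conductance states.
    """
    w = np.asarray(weights, dtype=float).ravel()
    lv = np.asarray(levels, dtype=float)
    # Distance from every weight to every level; keep the minimum per weight.
    d = np.abs(w[:, None] - lv[None, :])
    return float(np.mean(d.min(axis=1) ** 2))

# Hypothetical example: 5 evenly spaced conductance states in [-1, 1].
levels = np.linspace(-1.0, 1.0, 5)
# Weights already sitting on a level contribute zero penalty.
penalty = quantization_regularizer([0.5, -1.0, 0.1], levels)
print(penalty)  # only the weight 0.1 is off-grid: (0.1**2) / 3
```

In a Bayesian setting, the same penalty could be applied to the posterior mean of each weight, leaving the variance free to absorb write error and conductance drift.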
