Abstract

In recent years, memristive crossbar-based neuromorphic computing systems (NCS) have achieved very high performance in neural network acceleration. However, adversarial attacks and the conductance variations of memristors pose reliability challenges for NCS design. First, adversarial attacks can fool a neural network, posing a serious threat to security-critical applications. Second, device variations degrade network accuracy. In this article, we propose the DFS (Deep neural network Feature importance Sampling) and BFS (Bayesian neural network Feature importance Sampling) training strategies, which combine Bayesian Neural Network (BNN) prior setting, a clustering-based loss function, and feature importance sampling to simultaneously combat device variation, white-box attacks, and black-box attacks. Experimental results demonstrate that the proposed training framework improves NCS reliability.
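The abstract does not describe the sampling step itself, so as a rough, generic illustration only: feature importance sampling is often realized by scoring input features (e.g., by loss-gradient magnitude) and drawing features with probability proportional to those scores. The function name, scores, and gradient values below are hypothetical and are not taken from the paper.

```python
import numpy as np

def feature_importance_sampling(gradients, k, rng=None):
    """Sample k distinct feature indices with probability proportional
    to the magnitude of their loss gradients. This is a generic sketch;
    the exact DFS/BFS procedure is not given in the abstract."""
    rng = rng if rng is not None else np.random.default_rng(0)
    scores = np.abs(gradients)
    probs = scores / scores.sum()          # normalize magnitudes to a distribution
    return rng.choice(len(gradients), size=k, replace=False, p=probs)

# Toy example: 8 input features; indices 2 and 5 have the largest gradients,
# so they are the most likely to be sampled.
grads = np.array([0.1, 0.05, 2.0, 0.2, 0.1, 1.5, 0.05, 0.1])
idx = feature_importance_sampling(grads, k=3)
```

Sampling (rather than always taking the top-k features) keeps the selection stochastic across training steps, which is one common way to avoid overfitting the defense to a fixed feature subset.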
