Abstract

The parallel updating scheme of RRAM-based analog neuromorphic systems based on sign stochastic gradient descent (SGD) can dramatically accelerate the training of neural networks. However, sign SGD can decrease accuracy. In addition, non-ideal factors of RRAM devices, such as intrinsic variations and the limited number of intermediate states, may significantly degrade convergence. In this paper, we analyze the effects of these issues on the parallel updating scheme and find that it performs poorly on MNIST recognition when the number of intermediate states is limited or the variation is too large. We therefore propose a weighted synapse method to optimize the parallel updating scheme. A weighted synapse consists of a major and a minor synapse with different gain factors. The method can be widely applied in RRAM-based analog neuromorphic systems to increase the number of equivalent intermediate states exponentially. It also generates a more suitable ΔW, diminishing the distortion caused by sign SGD. Unlike schemes that combine several RRAM cells to achieve higher resolution, weighted synapses require no carry operations, even when the minor synapses saturate. The method also reduces circuit overhead, making it highly suitable for the parallel updating scheme. With the aid of weighted synapses, convergence is greatly improved and the error rate decreases significantly. Weighted synapses are also robust against the intrinsic variations of RRAM devices.
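
To make the idea concrete, below is a minimal NumPy sketch of a weighted synapse. It assumes, purely for illustration, that each RRAM device offers 16 evenly spaced conductance states, that the minor synapse's gain factor is the reciprocal of the number of states, and that the same sign-determined pulse is applied to both devices; the actual gain factors and pulse scheme follow the paper. The sketch shows how pairing a major and a minor synapse multiplies the number of equivalent weight levels and how a saturated minor synapse is simply clipped, with no carry into the major synapse.

```python
import numpy as np

# Illustrative assumption: an RRAM device with 16 evenly spaced
# intermediate conductance states in [0, 1].
N_STATES = 16
LEVELS = np.linspace(0.0, 1.0, N_STATES)

def quantize(g):
    """Snap an ideal conductance to the nearest available device state."""
    return LEVELS[np.argmin(np.abs(LEVELS - g))]

# A weighted synapse pairs a 'major' and a 'minor' device with different
# gain factors. With a gain ratio of N_STATES, the pair spans roughly
# N_STATES ** 2 = 256 equivalent weight levels instead of N_STATES.
GAIN_MAJOR = 1.0
GAIN_MINOR = 1.0 / N_STATES

def effective_weight(g_major, g_minor):
    """Effective weight seen by the network: a gain-weighted sum."""
    return GAIN_MAJOR * g_major + GAIN_MINOR * g_minor

def sign_sgd_update(g_major, g_minor, grad, step=1.0 / (N_STATES - 1)):
    """Apply one sign-SGD programming pulse to each device.

    Illustrative assumption: both devices receive the same pulse, scaled
    only by their gain factors through effective_weight(). A device that
    saturates is simply clipped at its conductance bounds; no carry into
    the other device is performed.
    """
    pulse = -np.sign(grad) * step
    g_major = quantize(np.clip(g_major + pulse, 0.0, 1.0))
    g_minor = quantize(np.clip(g_minor + pulse, 0.0, 1.0))
    return g_major, g_minor

# Example: one positive-gradient pulse on a synapse near the middle of its range.
g_maj, g_min = quantize(0.5), quantize(0.5)
g_maj, g_min = sign_sgd_update(g_maj, g_min, grad=+0.3)
print(effective_weight(g_maj, g_min))
```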

Highlights

  • Deep learning has made significant advances in many areas, such as image/speech recognition and natural language processing (LeCun et al., 2015)

  • It is well known that if the learning rate is too large, the cost function J may not decrease on every iteration, and stochastic gradient descent (SGD) may fail to converge

  • This paper proposes a new method based on weighted synapses to optimize the parallel updating scheme in resistive random access memory (RRAM)-based neural networks

Summary

INTRODUCTION

Deep learning has made significant advances in many areas, such as image/speech recognition and natural language processing (LeCun et al., 2015). RRAM-based analog neuromorphic systems can dramatically accelerate the training of neural networks by carrying out parallel update schemes (Burr et al., 2015; Kataeva et al., 2015; Gokmen and Vlasov, 2016; Fuller et al., 2017; Gokmen et al., 2017). We demonstrate the computational process of the customized backpropagation (BP) algorithm based on sign SGD and describe the updating logic of the proposed parallel update scheme. We then train a two-layer perceptron (784 × 200 × 10) for handwritten digit recognition on the MNIST database and analyze the effects of several non-ideal factors of RRAM devices.
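
As a rough illustration of this training flow, the sketch below implements a 784 × 200 × 10 perceptron updated with sign SGD in NumPy. The sigmoid activation, squared-error loss, fixed step size, and random initialization are assumptions made for the example; in the paper's customized BP, each sign-of-gradient update would be mapped onto parallel RRAM programming pulses rather than floating-point weight changes.

```python
import numpy as np

# Minimal sketch of the 784 x 200 x 10 perceptron trained with sign SGD.
# Activation, loss, step size, and initialization are illustrative choices.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (784, 200))
W2 = rng.normal(0.0, 0.1, (200, 10))
STEP = 0.01  # fixed step: sign SGD uses only the sign of each gradient entry

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t):
    """One customized-BP step: forward pass, backward pass, sign update."""
    global W1, W2
    h = sigmoid(x @ W1)          # hidden layer, shape (batch, 200)
    y = sigmoid(h @ W2)          # output layer, shape (batch, 10)
    # Backpropagate the squared-error gradient through the sigmoids.
    d_y = (y - t) * y * (1.0 - y)
    d_h = (d_y @ W2.T) * h * (1.0 - h)
    grad_W2 = h.T @ d_y
    grad_W1 = x.T @ d_h
    # Sign SGD: every weight moves by the same fixed step, only its sign
    # differs, which is what allows all cells to be pulsed in parallel.
    W2 -= STEP * np.sign(grad_W2)
    W1 -= STEP * np.sign(grad_W1)
    return np.mean((y - t) ** 2)

# Example usage with a random mini-batch standing in for MNIST data.
x_batch = rng.random((32, 784))
t_batch = np.eye(10)[rng.integers(0, 10, 32)]
print(train_step(x_batch, t_batch))
```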

Simulation Results and Analyses
Simulation Results
CONCLUSION