Abstract

On-chip implementation of large-scale neural networks with emerging synaptic devices is attractive but challenging, primarily due to the immature analog properties of today's resistive memory technologies. This work aims to realize a large-scale neural network for image recognition using today's available binary RRAM devices. We propose a methodology to binarize the neural network parameters, reducing the precision of weights and neurons to 1-bit for classification and to less than 8-bit for online training. We experimentally demonstrate the binary neural network (BNN) on Tsinghua's 16 Mb RRAM macro chip fabricated in a 130 nm CMOS process. Even under finite bit yield and endurance cycles, the system achieves ∼96.5% accuracy on the MNIST handwritten digit dataset for both classification and online training, close to the ∼97% accuracy of an ideal software implementation. This work reports the largest-scale synaptic array and the highest accuracy demonstrated to date.
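As an illustration of the kind of binarization the abstract describes, the sketch below shows 1-bit weights and neurons for the forward (classification) pass and a reduced-precision weight update for online training. This is a minimal assumption-laden example, not the paper's exact algorithm: the function names, the 6-bit update quantization, and the placeholder gradient are all hypothetical choices made for clarity.

```python
# Minimal sketch of 1-bit weights/neurons with a <8-bit quantized update.
# Assumptions: sign binarization, uniform 6-bit gradient quantization,
# and a random placeholder gradient; none of these are taken from the paper.
import numpy as np

def binarize(x):
    """Map real values to {-1, +1}, analogous to a binary RRAM cell state."""
    return np.where(x >= 0, 1.0, -1.0)

def quantize(x, bits=6, scale=0.05):
    """Uniformly quantize values to the given bit width (assumed <8-bit)."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale * levels), -levels, levels) * scale / levels

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(784, 10))   # full-precision "shadow" weights
x = binarize(rng.normal(size=(1, 784)))    # 1-bit input neurons

# Forward pass uses binary weights, as a binary RRAM array would store them.
y = x @ binarize(W)

# Online training: quantize the gradient before applying the weight update.
grad = rng.normal(0.0, 0.01, size=W.shape)  # placeholder gradient
W -= 0.1 * quantize(grad, bits=6)
```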
