Abstract

Deep analog artificial neural networks (ANNs) perform complex classification tasks with remarkably high accuracy. However, they rely on enormous amounts of power to perform their calculations, offsetting the accuracy benefits. The biological brain, on the other hand, is significantly more capable than such networks while consuming orders of magnitude less power, pointing to a fundamental conceptual mismatch. Since biological neurons communicate using energy-efficient trains of spikes and behave nondeterministically, incorporating these effects into deep artificial neural networks may bring us a few steps closer to a more realistic neuron. In this paper, we propose how the inherent stochasticity of nanoscale resistive devices can be harnessed to emulate the functionality of a spiking neuron that can be incorporated in deep stochastic spiking neural networks (SNNs). At the algorithmic level, we propose how the training can be modified to convert an ANN to an SNN while supporting the stochastic activation function offered by these devices. We devise circuit architectures that combine stochastic memristive neurons with memristive crossbars, which perform the functionality of the synaptic weights. We tested the proposed all-memristor deep stochastic SNN on image classification and observed only about 1% degradation in accuracy relative to the ANN baseline after incorporating the circuit- and device-related nonidealities. The network is robust to certain variations and consumes ~6.4× less energy than its complementary metal-oxide-semiconductor (CMOS) counterpart.
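To make the ANN-to-SNN conversion idea concrete, the sketch below (an illustration, not the authors' implementation; the function name, the [0, 1] activation scaling, and the Bernoulli model of device stochasticity are assumptions) shows how a bounded ANN activation can be reinterpreted as a per-time-step firing probability, so that the time-averaged spike rate of a stochastic neuron approximates the analog activation.

```python
# Minimal sketch: a stochastic spiking activation in which a neuron emits a spike
# in each time step with a probability set by its bounded ANN activation. The
# memristive neuron's inherent stochasticity is modeled here by Bernoulli sampling.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_spike(activations: np.ndarray, n_steps: int = 16) -> np.ndarray:
    """Approximate analog activations with stochastic spike trains.

    activations : ANN outputs, assumed already scaled to [0, 1]
                  (e.g., by a clipped ReLU), interpreted as firing probabilities.
    n_steps     : number of discrete time steps over which spikes are averaged.
    Returns the spike-rate estimate of each activation.
    """
    p = np.clip(activations, 0.0, 1.0)            # firing probability per step
    spikes = rng.random((n_steps, *p.shape)) < p  # Bernoulli spike events per step
    return spikes.mean(axis=0)                    # rate code approximates analog value

# Usage: rate-coded outputs converge to the analog activations as n_steps grows.
a = np.array([0.1, 0.5, 0.9])
print(stochastic_spike(a, n_steps=1000))  # ~ [0.1, 0.5, 0.9]
```

Averaging over more time steps trades latency (and spiking energy) for a closer match to the analog activation, which is the accuracy-versus-energy trade-off the abstract alludes to.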
