Abstract

Memristive arrays are a natural fit for implementing spiking neural network (SNN) acceleration. Representing information as digital spiking events can improve noise margins and tolerance to device variability compared to analog bitline current summation approaches to multiply–accumulate (MAC) operations. Restricting neuron activations to single-bit spikes also alleviates the significant analog-to-digital converter (ADC) overhead that mixed-signal approaches have struggled to overcome. Binarized and, more generally, limited-precision neural networks (NNs) are considered to trade off computational overhead against model accuracy, but unlike conventional deep learning models, SNNs do not encode information in the precision-constrained amplitude of the spike. Rather, information may be encoded in the spike time as a temporal code, in the spike frequency as a rate code, or in any number of stand-alone and combined codes. Even if activations and weights are bounded in precision, time can be treated as continuous and provides an alternative dimension in which to encode information. This article explores the challenges that face the memristor-based acceleration of NNs and how binarized SNNs (BSNNs) may offer a good fit for these emerging hardware systems.
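To make the coding distinction concrete, the following is a minimal illustrative sketch (not taken from the article) of how a single normalized value can be carried by 1-bit spike events under either a rate code or a time-to-first-spike (TTFS) temporal code. The function names, the timestep count `T`, and the linear latency mapping are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100  # number of discrete timesteps (illustrative choice)

def rate_code(x, T=T):
    """Bernoulli spike train: expected spike count scales with x in [0, 1]."""
    return (rng.random(T) < x).astype(np.int8)

def ttfs_code(x, T=T):
    """Single spike whose latency decreases as x grows (x=1 fires earliest)."""
    train = np.zeros(T, dtype=np.int8)
    if x > 0:
        t = int(round((1.0 - x) * (T - 1)))  # assumed linear latency mapping
        train[t] = 1
    return train

x = 0.8
r = rate_code(x)   # value recoverable from the mean firing rate
s = ttfs_code(x)   # value recoverable from the spike latency
```

Both trains consist only of single-bit events, in line with the abstract's point: precision is not carried in spike amplitude, but in when or how often spikes occur.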
