Abstract

Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge. Equilibrium propagation is a promising alternative to backpropagation as it only involves local computations, but hardware-oriented studies have so far focused on rate-based networks. In this work, we develop a spiking neural network algorithm called EqSpike, compatible with neuromorphic systems, which learns by equilibrium propagation. Through simulations, we obtain a test recognition accuracy of 97.6% on the MNIST (Modified National Institute of Standards and Technology) handwritten digits dataset, similar to rate-based equilibrium propagation and comparing favorably to alternative learning techniques for spiking neural networks. We show that EqSpike implemented in silicon neuromorphic technology could reduce the energy consumption of inference and training by three and two orders of magnitude, respectively, compared to graphics processing units. Finally, we show that during learning, EqSpike weight updates exhibit a form of spike-timing-dependent plasticity, highlighting a possible connection with biology.

Highlights

  • Spike-based neuromorphic systems have, in recent years, demonstrated outstanding energy efficiency on inference tasks (Merolla et al., 2014)

  • Synaptic values are updated by probing the neuron states after (Scellier and Bengio, 2017) or during (Ernoult et al., 2020) the nudging phase, through a learning rule shown theoretically and numerically to match the updates of BPTT, the state-of-the-art algorithm for such recurrent neural networks (Ernoult et al., 2019)

  • In this work, we present EqSpike, a new algorithm for spiking neural networks that is compatible with neuromorphic systems and achieves good performance on MNIST
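
The two-phase learning rule summarized in the highlights can be sketched in a few lines. The following is a minimal rate-based equilibrium-propagation sketch in the spirit of Scellier and Bengio (2017), not the spiking EqSpike algorithm itself; the layer sizes, activation function, and hyperparameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # Hard-sigmoid activation, as used in Scellier and Bengio (2017)
    return np.clip(s, 0.0, 1.0)

def rho_prime(s):
    return ((s >= 0.0) & (s <= 1.0)).astype(float)

# Toy layer sizes (illustrative only)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))   # symmetric input<->hidden weights
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))  # symmetric hidden<->output weights

def relax(x, target=None, beta=0.0, steps=200, dt=0.1):
    """Run the leaky neural dynamics to a fixed point.

    beta = 0 gives the free phase; beta > 0 weakly nudges the
    output units toward the target (the nudging phase)."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out)
    for _ in range(steps):
        dh = rho_prime(h) * (x @ W1 + rho(y) @ W2.T) - h
        dy = rho_prime(y) * (rho(h) @ W2) - y
        if beta > 0.0:
            dy += beta * (target - y)   # weak clamping toward the target
        h += dt * dh
        y += dt * dy
    return h, y

def eqprop_update(x, target, beta=0.5, lr=0.05):
    """One equilibrium-propagation weight update: contrast the
    pre/post activity correlations of the nudged and free fixed points."""
    h0, y0 = relax(x)                      # free phase
    hb, yb = relax(x, target, beta=beta)   # nudging phase
    dW1 = (np.outer(x, rho(hb)) - np.outer(x, rho(h0))) / beta
    dW2 = (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0))) / beta
    return lr * dW1, lr * dW2
```

Note that every update is local: each synapse only needs the activities of its own pre- and post-neuron in the two phases, which is what makes the rule attractive for neuromorphic hardware.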

Introduction

Spike-based neuromorphic systems have, in recent years, demonstrated outstanding energy efficiency on inference tasks (Merolla et al., 2014). However, STDP weight updates generally do not minimize a global objective function for the network, and the accuracy of STDP-trained neural networks remains below that of state-of-the-art algorithms based on error backpropagation (Falez et al., 2019). Three-factor learning rules attempt to close this gap: the first two factors take into account, as usual, the behavior of the pre- and post-neurons, and the third allows for the introduction of an additional error factor. This third factor, however, leads to implementations on neuromorphic chips that are less compact, and possibly less energy efficient, than two-factor learning rules such as STDP (Payvand et al., 2020).
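
For reference, the classic two-factor, pair-based STDP window discussed above can be sketched as follows; the amplitudes and time constants are common illustrative values, not parameters from this work:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one spike pair.

    delta_t = t_post - t_pre (in ms): potentiation when the presynaptic
    spike precedes the postsynaptic one, depression otherwise.
    Amplitudes and time constants here are illustrative defaults."""
    if delta_t >= 0.0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)
```

Only the relative spike timing of the pre- and post-neuron enters the update (the two factors), which makes STDP compact in hardware but blind to any global error signal — the gap that three-factor rules and algorithms such as EqSpike aim to fill.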
