Abstract

A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
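To make the learning rule described above more concrete, the following is a minimal NumPy sketch of a SuperSpike-style three-factor update for a single layer of leaky integrate-and-fire neurons. It is an illustration under stated assumptions, not the paper's implementation: all constants (time step, time constants, surrogate steepness beta, learning rate), the input statistics, and the helper names (run_trial, surrogate_grad) are illustrative choices, and the eligibility trace here uses a single exponential filter rather than the double-exponential filter used in the paper.

    # Minimal discrete-time sketch of a SuperSpike-style three-factor update
    # for one layer of leaky integrate-and-fire (LIF) neurons (NumPy).
    # All constants below are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 100, 10
    dt = 1e-3                           # 1 ms time step (assumed)
    tau_mem, tau_syn = 10e-3, 5e-3      # membrane / synaptic time constants (assumed)
    tau_err, tau_elig = 10e-3, 5e-3     # filters for error and eligibility traces (assumed)
    beta = 10.0                         # steepness of the surrogate derivative (assumed)
    lr = 1e-3                           # learning rate (assumed)

    w = 1e-2 * rng.standard_normal((n_out, n_in))

    def surrogate_grad(u, thresh=1.0):
        # Derivative of the fast sigmoid sigma(x) = x / (1 + |x|), evaluated on the
        # distance of the membrane potential from threshold (the surrogate gradient).
        return 1.0 / (1.0 + beta * np.abs(u - thresh)) ** 2

    def run_trial(in_spikes, target_spikes, w):
        # One pass over a spatiotemporal spike pattern.
        # in_spikes:     (T, n_in)  binary input spike raster
        # target_spikes: (T, n_out) binary target spike raster
        # Returns the accumulated weight update of the three-factor rule.
        T = in_spikes.shape[0]
        u = np.zeros(n_out)              # membrane potentials
        i_syn = np.zeros(n_out)          # synaptic currents
        pre_trace = np.zeros(n_in)       # filtered presynaptic spikes
        elig = np.zeros((n_out, n_in))   # eligibility traces (post x pre, filtered)
        err = np.zeros(n_out)            # filtered output error (target - actual spikes)
        dw = np.zeros_like(w)

        for t in range(T):
            # LIF dynamics with hard reset at threshold 1.0
            i_syn += dt / tau_syn * (-i_syn) + w @ in_spikes[t]
            u += dt / tau_mem * (-u + i_syn)
            out_spikes = (u >= 1.0).astype(float)
            u = np.where(out_spikes > 0, 0.0, u)

            # factor 1: low-pass-filtered output error (top-down factor)
            err += dt / tau_err * (-err) + (target_spikes[t] - out_spikes)

            # factors 2 and 3: surrogate derivative of the postsynaptic potential
            # times the filtered presynaptic activity, kept as an eligibility trace
            pre_trace += dt / tau_syn * (-pre_trace) + in_spikes[t]
            hebbian = np.outer(surrogate_grad(u), pre_trace)
            elig += dt / tau_elig * (-elig + hebbian)

            # three-factor product, integrated over the trial
            dw += lr * err[:, None] * elig

        return dw

    # usage: one random trial of 500 time steps (~0.5 s), arbitrary target raster
    T = 500
    in_spikes = (rng.random((T, n_in)) < 0.02).astype(float)
    target_spikes = (rng.random((T, n_out)) < 0.01).astype(float)
    w += run_trial(in_spikes, target_spikes, w)

For hidden layers, the paper replaces the per-neuron output error with error signals propagated through symmetric, random, or uniform feedback weights; the sketch above covers only a single output layer.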

Highlights

  • Neurons in biological circuits form intricate networks in which the primary mode of communication occurs through spikes

  • Although the details of how artificial rate-based networks are trained may arguably differ from how the brain learns, several studies have begun to draw interesting parallels between the internal representations formed by deep neural networks and the recorded activity from different brain regions (Yamins et al., 2014; McClure & Kriegeskorte, 2016; McIntosh, Maheswaranathan, Nayebi, Ganguli, & Baccus, 2016; Marblestone, Wayne, & Kording, 2016)

  • We develop a novel learning rule to train multilayer SNNs of deterministic leaky integrate-and-fire (LIF) neurons on tasks that fundamentally involve spatiotemporal spike pattern transformations

Introduction

Neurons in biological circuits form intricate networks in which the primary mode of communication occurs through spikes. Building meaningful spiking models of brain-like neural networks in silico remains a largely unsolved problem. In contrast, deep learning has been remarkably successful at training artificial rate-based neural networks. Although the details of how these artificial rate-based networks are trained may arguably differ from how the brain learns, several studies have begun to draw interesting parallels between the internal representations formed by deep neural networks and the recorded activity from different brain regions (Yamins et al., 2014; McClure & Kriegeskorte, 2016; McIntosh, Maheswaranathan, Nayebi, Ganguli, & Baccus, 2016; Marblestone, Wayne, & Kording, 2016). A major impediment to drawing a similar comparison at the spiking level is that we currently lack efficient ways of training spiking neural networks (SNNs), thereby limiting their applications to mostly small toy problems that do not fundamentally involve spatiotemporal spike-time computations. Only recently have some groups begun to train SNNs on data sets such as MNIST (Diehl & Cook, 2015; Guerguiev, Lillicrap, & Richards, 2017; Neftci, Augustine, Paul, & Detorakis, 2016; Petrovici et al., 2017), whereas most previous studies have used smaller artificial data sets.
