Abstract

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.

Highlights

  • Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain.

  • The neuron fires (i.e., emits a spike) when the membrane potential reaches a firing threshold; a minimal simulation sketch of this neuron model follows this list. It is an open problem how recurrent networks of spiking neurons (RSNNs) can learn, i.e., how their synaptic weights can be modified by local rules for synaptic plasticity so that the computational performance of the network improves.

  • Neurons with spike-frequency adaptation (SFA) are quite common in the neocortex [2], and it turns out that their inclusion in the RSNN significantly increases the computational power of the network [3].
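As a concrete illustration of the two highlights above, here is a minimal sketch (not code from the paper) of a discrete-time leaky integrate-and-fire neuron with an adaptive firing threshold, one simple way to model SFA. All parameter names and values (tau_m, tau_a, beta, the input level) are illustrative assumptions.

```python
import numpy as np

def simulate_adaptive_lif(input_current, dt=1.0, tau_m=20.0, tau_a=200.0,
                          v_th=1.0, beta=0.2, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron whose firing threshold
    is raised after every spike (a simple model of spike-frequency adaptation)."""
    alpha = np.exp(-dt / tau_m)   # per-step decay of the membrane potential
    rho = np.exp(-dt / tau_a)     # per-step decay of the adaptation variable
    v, a = 0.0, 0.0               # membrane potential and adaptation variable
    spikes = []
    for current in input_current:
        v = alpha * v + current                      # leaky integration of the input
        threshold = v_th + beta * a                  # adaptive firing threshold
        spike = v >= threshold                       # the neuron fires when the membrane
                                                     # potential reaches the threshold
        if spike:
            v = v_reset                              # reset after the spike
        a = rho * a + float(spike)                   # each spike raises the threshold,
                                                     # slowing subsequent firing (SFA)
        spikes.append(float(spike))
    return np.array(spikes)

# With a constant input, adaptation makes the spike train progressively sparser.
z = simulate_adaptive_lif(np.full(300, 0.12))
print("number of spikes:", int(z.sum()))
```

Setting beta = 0 removes the adaptation term and leaves a plain leaky integrate-and-fire neuron, which fires at a constant rate for constant input.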



Results

The key innovation is a rigorous proof (see “Methods”) that the gradient dE/dW_ji can be represented as a sum of products over the time steps t of the RSNN computation, where the second factor is just a local gradient that does not depend on E (the factorization is written out below). This local gradient is defined as a sum of products of partial derivatives concerning the hidden state h_j^t of neuron j at time t.

[Figure: a, target signal y*,t; b, evaluation of loss function E.]
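The excerpt announces the factorization, but the equation itself was lost in extraction. The following is a hedged reconstruction, to be checked against the paper's “Methods”; the symbols z_j^t (spike output of neuron j), e_ji^t (eligibility trace), and ε_ji^t (eligibility vector) are notation assumed here rather than quoted from the excerpt.

```latex
% Hedged reconstruction of the factorization described in the text above;
% the notation e_{ji}^t, \epsilon_{ji}^t is assumed, not quoted from the excerpt.
\frac{dE}{dW_{ji}}
  = \sum_t \frac{dE}{dz_j^t}\,
    \underbrace{\left[\frac{dz_j^t}{dW_{ji}}\right]_{\mathrm{local}}}_{=:\, e_{ji}^t},
\qquad
e_{ji}^t = \frac{\partial z_j^t}{\partial h_j^t}\,\epsilon_{ji}^t,
\qquad
\epsilon_{ji}^t = \frac{\partial h_j^t}{\partial h_j^{t-1}}\,\epsilon_{ji}^{t-1}
                  + \frac{\partial h_j^t}{\partial W_{ji}}.
```

In this reconstruction, unrolling the recursion for ε_ji^t yields a sum of products of partial derivatives concerning the hidden state h_j^t, matching the sentence above, and the recursion can be computed forward in time alongside the network dynamics, which is what makes the resulting learning rule online and local.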
