Abstract

Adaptive changes in synaptic efficacy between spiking neurons have been shown to play a critical role in learning for biological neural networks. Despite this source of inspiration, many learning-focused applications of Spiking Neural Networks (SNNs) retain static synaptic connections, preventing any further learning after the initial training period. Here, we introduce a framework for simultaneously learning, through gradient descent, the underlying fixed weights and the rules governing the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in SNNs. We further demonstrate the capabilities of this framework on a series of challenging benchmarks, learning the parameters of several plasticity rules, including BCM, Oja's, and their respective neuromodulatory variants. The experimental results show that SNNs augmented with differentiable plasticity can solve a set of challenging temporal learning tasks that a traditional SNN fails to solve, even in the presence of significant noise. These networks also prove capable of producing locomotion on a high-dimensional robotic learning task, with near-minimal degradation in performance under novel conditions not seen during the initial training period.

Highlights

  • The dynamic modification of neuronal properties underlies the basis of learning, memory, and adaptive behavior in biological neural networks

  • Building on Miconi et al. (2019), which focused on Artificial Neural Networks (ANNs), this paper provides a framework for incorporating plasticity and neuromodulation into Spiking Neural Networks (SNNs) trained using gradient descent

  • BCM differs from Oja's rule in that it exerts more direct control over potentiation and depression through a dynamic threshold, which often represents the average spike rate of each neuron. In this example of differentiable plasticity, we describe a model of BCM in which both the dynamics governing the plasticity and the stability-providing sliding threshold are learned through backpropagation, which we refer to as Differentiable Plasticity BCM (DP-BCM)
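The BCM mechanics described above can be sketched in a few lines. This is an illustrative rate-based version only, assuming a standard BCM formulation: the function name, the learning rate `eta`, and the threshold time constant `tau_theta` are hypothetical placeholders, and the paper's DP-BCM learns these dynamics through backpropagation on spike traces rather than fixing them by hand.

```python
import numpy as np

def bcm_update(w, pre, post, theta, eta=0.01, tau_theta=0.1):
    """One BCM step (illustrative sketch, not the paper's implementation).

    w     : (n_post, n_pre) weight matrix
    pre   : (n_pre,) presynaptic activity (e.g. a low-pass spike trace)
    post  : (n_post,) postsynaptic activity
    theta : (n_post,) sliding modification threshold
    """
    # Potentiate when post exceeds theta, depress otherwise:
    # dw = eta * post * (post - theta) * pre
    dw = eta * np.outer(post * (post - theta), pre)
    # The sliding threshold tracks a running average of post^2,
    # the classic stability mechanism that DP-BCM instead learns.
    theta = theta + tau_theta * (post**2 - theta)
    return w + dw, theta
```

With `post` above `theta`, the synapse potentiates and the threshold rises, which curbs runaway potentiation on later steps.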


Summary

INTRODUCTION AND RELATED WORK

The dynamic modification of neuronal properties underlies the basis of learning, memory, and adaptive behavior in biological neural networks. Networks endowed with plasticity on only the forward-propagating weights, with no recurrent self-connections, are shown to be sufficient for solving challenging temporal learning tasks that a traditional SNN fails to solve, even while experiencing significant noise perturbations. These networks are also much more capable of adapting to conditions not seen during training, in some cases displaying near-minimal degradation in performance.
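The "plasticity on the forward weights" idea can be sketched as follows, assuming the differentiable-plasticity formulation of Miconi et al.: each effective weight is a fixed part plus a learned per-synapse gain times a Hebbian trace that evolves within an episode. The function name, the `tanh` nonlinearity, and the trace decay `eta` are illustrative assumptions; the paper's networks use spiking dynamics and surrogate gradients rather than this rate-based stand-in.

```python
import numpy as np

def plastic_forward(x, w, alpha, hebb, eta=0.05):
    """One forward step through a plastic layer (illustrative sketch).

    w, alpha : trained by gradient descent and then held fixed
    hebb     : per-synapse Hebbian trace, updated online during the episode
    """
    # Effective weight = fixed component + learned gain * plastic trace.
    y = np.tanh((w + alpha * hebb) @ x)
    # Running Hebbian trace of pre/post coactivity (decays toward outer(y, x)).
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)
    return y, hebb
```

Because `w` and `alpha` are ordinary parameters of a differentiable computation, both the fixed weights and the plasticity rule's strength can be trained end to end by backpropagation, while `hebb` continues to change after training, which is what permits adaptation to unseen conditions.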

DIFFERENTIABLE PLASTICITY
Spiking Neural Network
Spike-Based Differentiable Plasticity
DP-Oja’s
Spike-Based Differentiable Neuromodulation
RESULTS
Noisy Cue-Association
High-Dimensional Robotic Locomotion Task
DISCUSSION
DATA AVAILABILITY STATEMENT