Abstract

Spiking networks that perform probabilistic inference have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. However, the evidence for spike-based computation being in any way superior to non-spiking alternatives remains scarce. We propose that short-term synaptic plasticity can provide spiking networks with distinct computational advantages compared to their classical counterparts. When learning from high-dimensional, diverse datasets, deep attractors in the energy landscape often cause mixing problems for the sampling process. Classical algorithms solve this problem by employing various tempering techniques, which are computationally demanding and require global state updates. We demonstrate how similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Additionally, we discuss how these networks can even outperform tempering-based approaches when the training data is imbalanced. We thereby uncover a powerful computational property of biologically inspired, local, spike-triggered synaptic dynamics based simply on a limited pool of synaptic resources, which enables these networks to deal with complex sensory data.
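
As a concrete illustration of the "limited pool of synaptic resources" mentioned above, the following minimal Python sketch simulates a Tsodyks-Markram-style depressing synapse, one common phenomenological model of short-term plasticity. The parameter values and the spike train are illustrative assumptions rather than values taken from this work; the sketch only shows that every presynaptic spike consumes part of a finite resource pool, so the effective synaptic efficacy is transiently suppressed by the synapse's own recent activity.

```python
import numpy as np

def depressing_synapse_efficacy(spike_times_ms, U=0.2, tau_rec_ms=200.0):
    """Efficacy of a depressing synapse at each presynaptic spike time.

    A limited pool of synaptic resources R (fraction available, starting at 1)
    is partially consumed by every spike and recovers exponentially between
    spikes with time constant tau_rec_ms. The transmitted efficacy is U * R,
    so a rapidly firing presynaptic neuron transiently weakens its own synapses.
    """
    R = 1.0
    last_t = None
    efficacies = []
    for t in spike_times_ms:
        if last_t is not None:
            # exponential recovery of the resource pool since the last spike
            R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec_ms)
        efficacies.append(U * R)  # efficacy actually transmitted by this spike
        R -= U * R                # a fraction U of the available pool is consumed
        last_t = t
    return np.array(efficacies)

# A regular 50 Hz burst: the efficacy drops spike by spike and would
# recover back toward U during a subsequent silent period.
print(depressing_synapse_efficacy(np.arange(0.0, 200.0, 20.0)))
```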

Highlights

  • Neural networks are, once again, the focus of both the artificial and the biological intelligence communities

  • We study the effects of short-term plasticity (STP) on the performance of spiking networks trained for different tasks

  • We start by discussing how STP can improve the sampling accuracy of small networks configured to sample from a fully specified target distribution, even when the energy landscape is shallow enough not to cause mixing problems (see the sketch after this list)
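
As a rough, non-spiking illustration of such a benchmark, the sketch below runs plain Gibbs sampling on a small, fully specified Boltzmann distribution and measures the Kullback-Leibler divergence between the sampled and the target state distributions. The network size, random weights, and number of sampling steps are arbitrary choices made for illustration; the experiments summarized here use spiking neurons rather than Gibbs updates.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# A fully specified target: a Boltzmann distribution over n binary units
# with a random symmetric weight matrix W (zero diagonal) and biases b.
n = 4
W = rng.normal(0.0, 0.5, (n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
b = rng.normal(0.0, 0.5, n)

# Enumerate all 2^n states and compute the exact target distribution.
states = np.array(list(itertools.product([0, 1], repeat=n)))
log_p = np.array([0.5 * z @ W @ z + b @ z for z in states])
p_target = np.exp(log_p - log_p.max())
p_target /= p_target.sum()

# Plain Gibbs sampling: update one unit at a time from its conditional.
z = rng.integers(0, 2, n)
counts = np.zeros(len(states))
for step in range(200_000):
    i = step % n
    p_on = 1.0 / (1.0 + np.exp(-(W[i] @ z + b[i])))  # diagonal of W is zero
    z[i] = rng.random() < p_on
    counts[int("".join(map(str, z)), 2)] += 1

# Sampling accuracy: divergence between target and empirical distribution.
p_sampled = counts / counts.sum()
dkl = np.sum(p_target * np.log(p_target / np.maximum(p_sampled, 1e-12)))
print(f"D_KL(target || sampled) = {dkl:.4f}")
```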

Introduction

Neural networks are once again the focus of both the artificial and the biological intelligence communities. Inspired by the dynamics and architecture of cortical networks[1,2], they have increasingly strayed away from their biological archetypes, prompting questions about their relevance for understanding the brain[3,4]. Their recent hardware-fueled dominance[5] has motivated renewed efforts to align them with biologically more plausible models[6,7,8,9]. The brain appears to learn a generative model of its sensory environment[18,19,20]. How these capabilities are achieved remains an open question, but it is unlikely that complex tempering schedules are at work. Instead, STP can be viewed as modulating the energy landscape only locally, thereby affecting only the currently active attractor.
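
The sketch below expresses that intuition in an abstract, non-spiking form: each unit carries a resource variable that is depleted while the unit is active and slowly recovers otherwise, and the unit's outgoing weights are scaled by this resource. The function name, the parameters U and tau, and the toy two-attractor weight matrix are assumptions made purely for illustration, not the spiking model studied here; they only show how local, activity-triggered depression can transiently flatten the currently occupied attractor while leaving distant attractors untouched.

```python
import numpy as np

def gibbs_with_resource_depletion(W, b, steps, U=0.3, tau=20.0, seed=0):
    """Gibbs-like sampling in which each unit's outgoing weights are scaled
    by a 'synaptic resource' r in [0, 1]. The resource is depleted whenever
    the unit is active (mimicking spike-triggered depression) and recovers
    otherwise, so the attractor the chain currently occupies is transiently
    flattened while the rest of the energy landscape is left unchanged.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    z = rng.integers(0, 2, n)
    r = np.ones(n)                      # available resources per unit
    samples = np.empty((steps, n), dtype=int)
    for t in range(steps):
        i = t % n
        # effective field: presynaptic weights scaled by presynaptic resources
        field = (W[i] * r) @ z + b[i]
        z[i] = rng.random() < 1.0 / (1.0 + np.exp(-field))
        # local, activity-triggered resource dynamics
        r -= U * r * z                  # depletion of currently active units
        r += (1.0 - r) / tau            # slow recovery toward the full pool
        samples[t] = z
    return samples

# Toy usage: two strongly coupled pairs form two competing attractors.
W = np.array([[ 0.0,  4.0, -2.0, -2.0],
              [ 4.0,  0.0, -2.0, -2.0],
              [-2.0, -2.0,  0.0,  4.0],
              [-2.0, -2.0,  4.0,  0.0]])
b = -1.0 * np.ones(4)
s = gibbs_with_resource_depletion(W, b, steps=20_000)
print("fraction of time in first attractor:",
      (s[:, :2].sum(1) > s[:, 2:].sum(1)).mean())
```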

