Abstract

Cortical neurons are silent most of the time: sparse activity enables low-energy computation in the brain, and promises to do the same in neuromorphic hardware. Beyond power efficiency, sparse codes have favourable properties for associative learning, as they can store more information than local codes but are easier to read out than dense codes. Auto-encoders with a sparse constraint can learn sparse codes, and so can single-layer networks that combine recurrent inhibition with unsupervised Hebbian learning. But the latter usually require fast homeostatic plasticity, which could lead to catastrophic forgetting in embodied agents that learn continuously. Here we set out to explore whether plasticity at recurrent inhibitory synapses could take up that role instead, regulating both the population sparseness and the firing rates of individual neurons. We put the idea to the test in a network that employs compartmentalised inputs to solve the task: rate-based dendritic compartments integrate the feedforward input, while spiking integrate-and-fire somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic homeostatic plasticity is not strictly required for regulating sparseness: inhibitory synaptic plasticity can have the same effect. Our work illustrates the usefulness of compartmentalised inputs, and makes the case for moving beyond point neuron models in artificial spiking neural networks.
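To make the described architecture concrete, here is a minimal sketch in Python of one update step for such a network. The tanh dendritic nonlinearity, the Euler integration, and all parameter values are illustrative assumptions, not the paper's exact model:

    import numpy as np

    # Minimal sketch of the two-compartment network described above.
    # Parameter values, the tanh dendritic nonlinearity and the Euler update
    # are illustrative assumptions, not taken from the paper.
    N_IN, N_OUT = 784, 100          # e.g. MNIST pixels -> network neurons
    DT, TAU_M = 1e-3, 20e-3         # integration step and somatic time constant (s)
    V_THRESH, V_RESET = 1.0, 0.0    # LIF spike threshold and reset

    rng = np.random.default_rng(0)
    W_ff = rng.normal(0.0, 0.05, size=(N_OUT, N_IN))   # feedforward weights onto the dendrites
    W_inh = np.zeros((N_OUT, N_OUT))                   # recurrent inhibitory weights between somas
    v = np.zeros(N_OUT)                                # somatic membrane potentials

    def step(x, spikes_prev, v):
        """One time step: the rate-based dendrite integrates the input x,
        the spiking LIF somas compete through recurrent inhibition."""
        dendrite = np.tanh(W_ff @ x)              # graded dendritic activity
        inhibition = W_inh @ spikes_prev          # lateral inhibition from the previous step
        v = v + (dendrite - inhibition - v) * DT / TAU_M
        spikes = (v >= V_THRESH).astype(float)    # LIF somas that reach threshold fire
        v = np.where(spikes > 0, V_RESET, v)      # and are reset
        return spikes, v

In this reading, the dendrite supplies a graded feedforward drive while the inhibitory term implements the competition between somas; the inhibitory weights W_inh would themselves be learned by the recurrent learning rule.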

Highlights

  • Activity in the brain is sparse: a given pyramidal neuron spikes infrequently, and few neurons are active at once

  • Our model is a network of N neurons, each of which consists of a spiking, leaky integrate-and-fire (LIF) soma, and a rate-based dendritic compartment (Fig. 1)

  • The receptive fields do change rapidly when we switch from the full MNIST to zeros only: they adapt to match the new distribution of the independent components and forget the features that were specific to other digits

Introduction

Activity in the brain is sparse: a given pyramidal neuron spikes infrequently (lifetime sparseness), and few neurons are active at once (population sparseness). Starting with Földiák (1990), these two heuristics have been applied in a variety of sparse coding networks with rate-based (Butko & Triesch, 2007; Falconbridge, Stamps, & Badcock, 2006; Lücke, 2007) and spiking neurons (Ferré, Mamalet, & Thorpe, 2018; King, Zylberberg, & DeWeese, 2013; Savin, Joshi, & Triesch, 2010; Zylberberg, Murphy, & DeWeese, 2011). These networks have in common the use of Hebbian lateral inhibition to decorrelate the output, and of nonlinear Hebbian rules to perform projection pursuit on the feedforward input. We found that one can adjust the somatic and dendritic transfer functions to produce a BCM-like curve in which the threshold between depression and potentiation follows an instantaneous measure of somatic inhibition. This lets the network learn sparse codes by regulating population sparseness instead of lifetime sparseness, and does not require fast intrinsic plasticity.
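Under one plausible reading of that rule, the dendritic weight update is a BCM-shaped function of dendritic activity whose depression/potentiation threshold is set by the inhibition currently arriving at the soma, rather than by a slow average of the postsynaptic rate. A hedged sketch of such an update (the function name and exact functional form are illustrative, not the paper's):

    import numpy as np

    def dendritic_update(x, dendrite, somatic_inhibition, lr=1e-3):
        """BCM-like Hebbian update for the feedforward (dendritic) weights:
        potentiation when dendritic activity exceeds a threshold given by the
        instantaneous somatic inhibition, depression below it.
        The exact functional form here is an illustrative assumption."""
        theta = somatic_inhibition              # threshold tracks inhibition, not a slow rate average
        phi = dendrite * (dendrite - theta)     # BCM-shaped postsynaptic factor
        return lr * np.outer(phi, x)            # Hebbian outer product with the presynaptic input x

Because the threshold rises when many neurons are active (and inhibition is therefore strong), only the most strongly driven dendrites potentiate, which is how a constraint on population sparseness can stand in for one on lifetime sparseness.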

Network model
Feedforward learning rules
Recurrent learning rule
Receptive fields
Linear decoding
Sparseness
Stability and response to perturbations
Sparse coding does not require fast IP
Compartmentalised inputs let local rules estimate population sparseness
Sparse coding via population sparseness is robust to input deprivation
Sparse activity does not imply sparse coding
Comparison with similar models
Biological interpretation
Relevance for machine learning
Somatic compartments and somatic synapses
Dendritic compartments
Measure of sparseness
Natural images