Abstract

The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.

Highlights

  • A central question in systems neuroscience is whether optimization principles can account for the architecture and physiology of the nervous system

  • To quantify the similarity between experimentally measured V1 receptive fields and those learned by our Sparse and Independent Local network (SAILnet) model, we introduce a novel nonparametric RF comparison tool based on image registration techniques

  • Because sparseness can decrease during development while the mature network still performs sparse coding, the active sparseness maximization ruled out by recent experiments [14] is neither necessary to produce observed V1 receptive field shapes nor required to learn a sparse representation of natural scenes
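The registration-based RF comparison is described in the paper's Methods; as a rough illustration only (not the authors' actual procedure), a translation-only variant can be sketched by finding the 2-D shift that best aligns two receptive field patches and reporting the resulting correlation. The function name and the use of circular shifts are assumptions for this sketch:

```python
import numpy as np

def rf_similarity(rf_a, rf_b):
    """Illustrative translation-only RF comparison (a sketch, not the
    paper's full registration method): z-score both patches, find the
    circular 2-D shift maximizing their cross-correlation via the FFT,
    and return the correlation at that best alignment (in [-1, 1])."""
    a = (rf_a - rf_a.mean()) / (rf_a.std() + 1e-12)
    b = (rf_b - rf_b.mean()) / (rf_b.std() + 1e-12)
    # Circular cross-correlation over all 2-D translations at once
    xc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    # Dividing by the pixel count turns the peak into a Pearson-style
    # correlation coefficient for the best-aligned overlay
    return float(xc.max() / a.size)
```

A full registration approach would also search over rotations and scalings before comparing shapes; this sketch handles translation only, which already makes the measure insensitive to where an RF happens to sit within the patch.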

Introduction

A central question in systems neuroscience is whether optimization principles can account for the architecture and physiology of the nervous system. One candidate principle is sparse coding (SC), which posits that neurons encode input stimuli efficiently: stimuli should be encoded with maximum fidelity while simultaneously using the smallest possible amount of neural activity [1,2]. Much evidence suggests that primary visual cortex (V1) forms sparse representations of visual stimuli [1,3,4,5,6,7]. When trained with natural scenes, SC models have been shown to learn the same types of receptive fields (RFs) as are exhibited by simple cells in macaque V1 [1,8]. Throughout this paper, we make reference to the notion of ‘‘sparseness’’; the precise definitions we use are given in the Methods section.
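The paper's exact sparseness definitions appear in its Methods section. As a minimal, hedged illustration of one measure commonly used for this purpose, the Treves–Rolls sparseness of a non-negative activity vector can be computed as follows (the function name is ours; whether this matches the paper's chosen measure is an assumption):

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Treves-Rolls sparseness of a non-negative rate vector r
    (illustrative; not necessarily the paper's exact definition).
    Returns ~0 when activity is spread uniformly across units and
    approaches 1 as activity concentrates in a single unit."""
    r = np.asarray(r, dtype=float)
    n = r.size
    # Ratio of squared mean to mean square: 1 for uniform activity,
    # 1/n for a one-hot vector
    s = (r.sum() / n) ** 2 / (np.mean(r ** 2) + 1e-12)
    # Rescale so the result spans [0, 1] regardless of n
    return (1.0 - s) / (1.0 - 1.0 / n)
```

Applied across neurons for a single stimulus this measures population sparseness; applied across stimuli for a single neuron it measures lifetime sparseness, so a single function can serve both roles.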
