Abstract

Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. By bringing together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework, SSM offers a biologically plausible substrate for learning and memory.
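
To make the proposed two-phase dynamics concrete, the sketch below (Python with NumPy) implements a toy rate-based recurrent network that alternates between an externally driven phase and an internally driven phase, with a local matching rule that nudges each synapse so that its pre/post coactivation statistic in the internal phase approaches the one measured in the external phase. This is only one illustrative interpretation of the synaptic state matching idea, not the model used in the paper; the network size, nonlinearity, input pattern, and learning rate are all assumptions chosen for demonstration.

# Illustrative sketch (not the paper's implementation): a toy rate-based
# recurrent network that alternates between an externally driven phase and an
# internally driven (recurrent-only) phase. A local "state matching" update
# moves each synapse so that its pre/post coactivation statistic in the
# internal phase approaches the one measured in the external phase.
import numpy as np

rng = np.random.default_rng(0)

N = 50          # number of units (assumed)
T_PHASE = 100   # time steps per phase (assumed)
ETA = 0.01      # learning rate for the matching update (assumed)

W = 0.1 * rng.standard_normal((N, N))   # recurrent weights
np.fill_diagonal(W, 0.0)                # no self-connections

def step(rate, W, external=None):
    """One update of unit activities: recurrent drive plus optional input."""
    drive = W @ rate
    if external is not None:
        drive = drive + external
    return np.tanh(drive)               # squashing nonlinearity (assumed)

def run_phase(W, external_fn=None):
    """Run one phase; return average pre*post coactivation per synapse."""
    rate = rng.random(N) * 0.1
    coactivation = np.zeros((N, N))
    for t in range(T_PHASE):
        ext = external_fn(t) if external_fn is not None else None
        rate = step(rate, W, ext)
        # outer product: entry (i, j) ~ post_i * pre_j for synapse j -> i
        coactivation += np.outer(rate, rate)
    return coactivation / T_PHASE

def sensory_input(t):
    """Toy structured input: a drifting bump of activity plus noise (assumed)."""
    center = (t // 5) % N
    bump = np.exp(-0.5 * ((np.arange(N) - center) / 3.0) ** 2)
    return bump + 0.05 * rng.standard_normal(N)

for epoch in range(200):
    # Externally driven phase: statistics imposed by the sensory stream.
    stats_external = run_phase(W, sensory_input)
    # Internally driven phase: same network, no input, recurrent drive only.
    stats_internal = run_phase(W, None)
    # Local matching update: each synapse moves to reduce the mismatch
    # between its own coactivation statistic in the two phases.
    W += ETA * (stats_external - stats_internal)
    np.fill_diagonal(W, 0.0)

mismatch = np.abs(stats_external - stats_internal).mean()
print(f"mean per-synapse statistic mismatch after training: {mismatch:.4f}")

Because each update depends only on the activity of that synapse's own pre- and postsynaptic units, the rule uses strictly local information, and it vanishes once the two phases produce matching statistics, which is the intended stability property.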

Highlights

  • The search for the function of cortical circuits has been a central focus of neuroscience since the pioneering works of Mountcastle [1] and Hubel & Wiesel [2].

  • Cognitive systems generate internal representations of the outside world that facilitate adaptive behaviors. To what extent can this high-level truism inform us about neural architecture and computation operating at the very lowest levels? Here I show that a logical extension of this principle, down to the scale of individual synapses, naturally leads to a dynamical neural architecture that generates predictive internal representations, enabling feature detection from noisy sensory input.

  • Arguing from first principles, I motivate a parsimonious neural architecture capable of simulating dynamical systems through an inherently stable synaptic modification process operating on strictly local information.

Introduction

The search for the function of cortical circuits has been a central focus of neuroscience since the pioneering works of Mountcastle [1] and Hubel & Wiesel [2]. A guiding hypothesis behind much of this work is the existence of a canonical microcircuit whose basic computation is critical to sensory processing throughout the cortex. Detailed micro-architectural maps [3] show great promise in advancing our understanding of these circuits, and these efforts will benefit greatly from constraints on the nature of the computational task itself. Cognitive systems generate internal representations of the outside world that facilitate adaptive behaviors. To what extent can this high-level truism inform us about neural architecture and computation operating at the very lowest levels? Here I show that a logical extension of this principle, down to the scale of individual synapses, naturally leads to a dynamical neural architecture that generates predictive internal representations, enabling feature detection from noisy sensory input.
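
As a point of reference for the pattern-completion claim, the sketch below shows how purely recurrent dynamics can recover a stored pattern from a noisy cue. It uses a standard Hopfield-style attractor network rather than the architecture developed in this paper; the number of stored patterns, the network size, and the noise level are arbitrary assumptions chosen for illustration only.

# Toy illustration (not the model from this paper): a Hopfield-style attractor
# network stores binary patterns in its recurrent weights and recovers a stored
# pattern from a corrupted cue through recurrent dynamics alone.
import numpy as np

rng = np.random.default_rng(1)

N = 100                                        # units (assumed)
patterns = rng.choice([-1, 1], size=(3, N))    # three stored binary patterns

# Hebbian storage: W_ij proportional to the sum over patterns of xi_i * xi_j.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, W, steps=20):
    """Iterate the recurrent dynamics from a noisy cue until roughly settled."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 20% of one stored pattern and let the recurrent network complete it.
target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1

recovered = recall(cue, W)
overlap = (recovered == target).mean()
print(f"fraction of bits recovered: {overlap:.2f}")

With only a few stored patterns relative to the number of units, the corrupted cue falls within the basin of attraction of the stored pattern, and the recurrent dynamics restore it almost perfectly.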

