Abstract

Continuous attractor networks are used to model the storage and representation of analog quantities, such as the position of a visual stimulus. The storage of multiple continuous attractors in the same network has previously been studied in the context of self-position coding: several uncorrelated maps of environments are stored in the synaptic connections, and a position in a given environment is represented by a localized pattern of neural activity in the corresponding map, driven by a spatially tuned input. Here we analyze networks storing a pair of correlated maps, or a morph sequence between two uncorrelated maps. We find a novel state in which the network activity is simultaneously localized in both maps. In this state, a fixed cue presented to the network does not uniquely determine the location of the bump, i.e., the response is unreliable, with neurons not always responding when their preferred input is present. When the tuned input varies smoothly in time, the neuronal responses become reliable and selective for the environment: the subset of neurons responsive to a moving input in one map changes almost completely in the other map. This form of remapping is a non-trivial transformation between the tuned input to the network and the resulting tuning curves of the neurons. The new state of the network could be related to the formation of direction selectivity in one-dimensional environments and to hippocampal remapping. The applicability of the model is not confined to self-position representations; we show an instance of the network solving a simple delayed discrimination task.
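
To make "simultaneously localized in both maps" concrete, one hypothetical diagnostic (an illustration, not taken from the paper) is the modulus of the population vector of the activity computed in each map's coordinates. A minimal Python sketch, assuming ring-like maps that assign each neuron an angular preferred position:

    import numpy as np

    def localization(r, H):
        """Population-vector modulus of activity r in map H (angles on a ring):
        close to 1 when r is a single bump in that map, close to 0 when diffuse."""
        return np.abs(np.sum(r * np.exp(1j * H))) / np.sum(r)

    # In the state described above, localization(r, H1) and localization(r, H2)
    # would both be large: the same activity is a bump in both stored maps.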

Highlights

  • The ability to keep an internal representation of a continuous variable in the absence of sensory stimuli is a crucial requirement for succeeding in what can be considered trivial day-to-day actions or experimenter-designed tasks

  • How is your position in an environment represented in the brain, and how does the representation distinguish between multiple environments? One of the proposed answers relies on continuous attractor neural networks

  • How do the results described so far change when, instead of storing just two correlated maps, the network encodes a sequence of maps gradually morphed between two uncorrelated ones? Let us start by constructing two random uncorrelated maps, H1 and H2 (see the sketch after this list)
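
In place-cell models of this kind, a map is commonly an assignment of a preferred position (a place-field centre) to each neuron, and uncorrelated maps are independent random assignments. Below is a minimal Python sketch of this construction; the morphing rule (relocating a growing fraction of field centres from one map to the other) is an assumption for illustration and may differ from the paper's actual procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200  # number of neurons

    # A map assigns each neuron a preferred position on a ring; two
    # independent random permutations give uncorrelated maps H1 and H2.
    positions = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    H1 = rng.permutation(positions)
    H2 = rng.permutation(positions)

    def morph(H_a, H_b, gamma, rng):
        """Hypothetical morph: move a random fraction gamma of the
        field centres of map H_a to their locations in map H_b."""
        H = H_a.copy()
        moved = rng.random(len(H)) < gamma
        H[moved] = H_b[moved]
        return H

    # A sequence of maps gradually morphed between the two uncorrelated ones.
    morph_sequence = [morph(H1, H2, g, rng) for g in np.linspace(0.0, 1.0, 11)]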

Introduction

The ability to keep an internal representation of a continuous variable in the absence of sensory stimuli is a crucial requirement for succeeding in what can be considered trivial day-to-day actions or experimenter-designed tasks. The temporary maintenance of an item in memory corresponds to a specific network pattern of activity, which is stabilized via strengthened recurrent connections between the active neurons in the pattern [8,9,10,11]. These connections are usually imposed, or trained, as the outcome of some form of Hebbian learning. The attractor is called continuous when the stable states form a continuous manifold that can be parametrized by the state variables. This outcome is obtained under certain conditions on the synaptic connections, for example when the connections between neurons are lateral-inhibition-like (e.g., Mexican hat) [12,13,14]. The external input links the position on the map to the state variable, forming a representation.
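
To make this mechanism concrete, here is a minimal single-map sketch in Python; all parameter values are illustrative assumptions, not the paper's. A rate network on a ring with Mexican-hat connectivity forms a bump of activity at the cued position and keeps it after the tuned input is removed, which is the continuous-attractor behaviour described above:

    import numpy as np

    N = 256
    theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred positions

    # Lateral-inhibition-like ("Mexican hat") connectivity: short-range
    # excitation minus uniform inhibition, depending only on the wrapped
    # angular distance between the neurons' preferred positions.
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    J = 20.0 * np.exp(-d**2 / (2 * 0.3**2)) - 4.0

    def phi(x):
        """Saturating rectification keeping rates in [0, 1]."""
        return np.clip(x, 0.0, 1.0)

    r = np.zeros(N)
    cue = 2.0 * np.exp(-d[:, N // 2]**2 / (2 * 0.2**2))  # tuned input at theta = pi

    dt, tau = 0.1, 1.0
    for t in range(1000):
        inp = cue if t < 300 else 0.0  # switch the external cue off part-way
        r += dt / tau * (-r + phi(J @ r / N + inp))

    # The bump persists at the cued position after the input is removed:
    centre = np.angle(np.sum(r * np.exp(1j * theta))) % (2.0 * np.pi)
    print("bump centre:", centre)  # close to pi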

