Abstract

A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because the rules proposed for learning these weights have been shown to have various limitations to their biological plausibility, making it unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that tunes synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule achieves stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a broad range of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.
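As a concrete illustration of the idea, the sketch below simulates a single linear rate unit whose recurrent weight is nudged by a local, error-driven update, using the drift of the unit's activity as a stand-in for a retinal-slip-like error signal of the kind available to the oculomotor system. This is a minimal toy model under stated assumptions, not the paper's exact rule: the single unit, the learning rate, and the use of the instantaneous drift as the error are all illustrative choices.

    # Minimal sketch, not the paper's exact rule: a single linear rate unit whose
    # recurrent weight w is tuned by a local, error-driven update. The drift of
    # the unit's activity stands in for a retinal-slip-like error signal; the
    # unit count, learning rate, and trial structure are illustrative assumptions.
    import numpy as np

    tau_syn, dt, eta = 0.1, 0.001, 0.05  # synaptic time constant (s), Euler step (s), learning rate
    w = 0.9                              # deliberately mistuned recurrent weight

    rng = np.random.default_rng(0)
    for trial in range(200):
        r = rng.uniform(0.5, 1.5)        # hold a random "eye position" each trial
        for _ in range(int(0.5 / dt)):   # let the activity drift for 0.5 s
            dr = dt / tau_syn * (-r + w * r)
            drift = dr / dt              # drift rate, standing in for retinal slip
            w -= eta * dt * drift * r    # local rule: presynaptic rate x drift error
            r += dr

    print(f"learned w = {w:.4f} (perfect integration at w = 1)")

Because the update anti-correlates presynaptic activity with the drift error, a decaying memory (w < 1) pushes the weight up and a runaway memory (w > 1) pushes it down, so the weight settles at the value giving persistence.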

Highlights

  • Persistent neural activity is typically characterized as a sustained increase in neural firing, sometimes lasting up to several seconds, and usually following a brief stimulus

  • We suggest a generalization of this rule that may be exploited by a wide variety of neural systems to induce stability in higher-dimensional spaces, like those possibly used in the head-direction and path integration systems in the rat [33,5,34,14]

  • We apply the learning rule to the oculomotor integrator, presenting ten experiments that benchmark the system and reproduce a variety of plasticity observations in the oculomotor system



Introduction

Persistent neural activity is typically characterized as a sustained increase in neural firing, sometimes lasting up to several seconds, and usually following a brief stimulus. As demonstrated by [15], precise tuning of the recurrent connection weights is required to achieve appropriate persistent activity in this class of simple recurrent networks. In the oculomotor integrator, which has long been a central experimental target for characterizing persistent activity in a biological setting [1,2,17,18], the precision of the recurrent weights required to induce drifts slow enough to match the observed behavior is known to be quite high, on the order of 1% [19]. This 1% accuracy refers to the accuracy of tuning the unity eigenvalue of the recurrent weight matrix, and can be expressed as the ratio of the synaptic connection time constant, τsyn, to the system time constant [2]. The evidence supporting more exotic alternatives to such fine-tuning in the relevant neural systems is quite weak [13].
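To make the precision requirement concrete, the following toy calculation (assuming a single linear unit whose recurrent gain w stands in for the unity eigenvalue, with an illustrative τsyn of 100 ms) shows that the effective memory time constant scales as τsyn/(1 − w), so even a 1% mistuning of the gain limits persistence to roughly 100 τsyn, about 10 s here.

    # Toy calculation, assuming a single linear unit with recurrent gain w standing
    # in for the unity eigenvalue: tau_syn * dr/dt = -r + w*r, so activity decays
    # (or grows) with an effective time constant tau_eff = tau_syn / (1 - w).
    # With tau_syn = 100 ms, a 1% mistuning (w = 0.99) yields only ~10 s of persistence.
    import numpy as np

    tau_syn, dt = 0.1, 0.001  # synaptic time constant and Euler step, in seconds

    def rate_after(w, r0=1.0, t_max=5.0):
        """Integrate tau_syn * dr/dt = -r + w*r from r(0) = r0 for t_max seconds."""
        r = r0
        for _ in range(int(t_max / dt)):
            r += dt / tau_syn * (-r + w * r)
        return r

    # A negative tau_eff indicates runaway exponential growth rather than decay.
    for w in (0.90, 0.99, 1.00, 1.01):
        tau_eff = np.inf if w == 1.0 else tau_syn / (1.0 - w)
        print(f"w = {w:.2f}: tau_eff = {tau_eff:8.2f} s, r(5 s) = {rate_after(w):.3f}")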


