Abstract

Optimal network architectures for short-term memory under different biological settings

Sukbin Lim1* and Mark Goldman1

1 UC Davis, Center for Neuroscience, United States

Short-term memory is thought to be maintained by patterns of neural activity that are initiated by a memorized stimulus and persist long after its offset. Because memory periods are long relative to the biophysical time constants of individual neurons, it has been suggested that network interactions extend the time over which neural activity is sustained. However, the form of such interactions is currently unknown, and experimental and theoretical work has suggested a range of network architectures that could subserve short-term memory, such as feedforward networks or recurrently connected networks implementing attractor dynamics. Here, we explore the conditions under which each network may be optimal, in order to gain insight into why different mechanisms might be used by different systems.

For each network architecture, we characterize how the fidelity of memory is maintained in the presence of noise by calculating the Fisher information conveyed by the network activity about the stimulus strength at a previous time. Calculations are performed under several biologically relevant conditions, such as common versus independent noise, variable memory durations, and additional constraints such as whether the start time of the stimulus to be memorized is known.

We first consider low-dimensional ("line attractor") networks that have been suggested to occur in oculomotor and neocortical working memory systems. In the presence of uncorrelated noise, we find a paradoxical result: network performance benefits from an imperfect memory-holding mechanism, independent of the level of noise. We show that there is an "optimal forgetting" time constant of decay of network activity that reflects a tradeoff between having a long time constant, so that the signal does not decay, and having a short time constant, so that noise does not accumulate excessively. This result assumes that noise is present continually and can build up before the signal arrives. However, if noise enters the system together with the signal, or if the animal can anticipate the start of memory performance and reset its neuronal activities, then the perfect integrating mode performs better than any decaying mode.

The feedforward network exhibits qualitatively different behavior: the duration of input accumulation is bounded by the number of feedforward stages, and noise flows out of the system after some time. This makes the feedforward network outperform the line attractor when no reset is available. However, a reset mechanism does little to improve the performance of the feedforward network, so it performs worse than the line attractor with reset.

Together, these results suggest that there may not be a single network architecture that is optimal in all situations. Our work has already suggested how the optimal time constant of decay of activity, and the optimal network architecture, may differ depending upon the experimental setting, the time over which the memory must be stored, and the form in which noise arrives at the network. Currently, we are testing how correlated noise and constraints on synaptic strengths influence the information conveyed by a memory network, and are developing computational methods to find the optimal architecture in different experimental settings.
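The "optimal forgetting" tradeoff described above can be illustrated with a simple back-of-the-envelope calculation. The sketch below is a minimal illustration, not the authors' actual model: the one-dimensional leaky-integrator equation, the parameter values, and the function names are assumptions made for this example. It computes the Fisher information about the amplitude of a brief stimulus pulse held by a leaky integrator, under the two noise conditions discussed in the abstract.

```python
# Minimal sketch (not the presented model): Fisher information about a
# remembered stimulus amplitude A in a one-dimensional leaky integrator
#   dx/dt = -x/tau + A*delta(t - t_stim) + white noise (std sigma),
# read out a time `delay` after the stimulus. With Gaussian noise the
# Fisher information is (d<x>/dA)^2 / Var[x].
import numpy as np

def fisher_info(tau, delay, sigma, noise_before_stimulus):
    """Fisher information about the pulse amplitude at readout time.

    noise_before_stimulus=True : noise has accumulated long before the
    stimulus, so its variance has saturated at sigma^2 * tau / 2.
    noise_before_stimulus=False: noise enters only with the stimulus
    (or after a reset), so it accumulates only during the delay.
    """
    signal_gain = np.exp(-delay / tau)      # surviving fraction of the pulse
    if noise_before_stimulus:
        var = 0.5 * sigma**2 * tau
    else:
        var = 0.5 * sigma**2 * tau * (1.0 - np.exp(-2.0 * delay / tau))
    return signal_gain**2 / var

delay, sigma = 1.0, 0.2                     # memory delay and noise level (arbitrary units)
taus = np.linspace(0.1, 20.0, 2000)         # candidate decay time constants

# Pre-stimulus noise: information peaks at a finite tau (~ 2 * delay).
J_pre = fisher_info(taus, delay, sigma, noise_before_stimulus=True)
print("optimal tau with pre-stimulus noise:", taus[np.argmax(J_pre)])

# Noise arriving only with the stimulus: information grows with tau,
# so the perfect integrator (tau -> infinity) is best.
J_post = fisher_info(taus, delay, sigma, noise_before_stimulus=False)
print("information increases monotonically with tau:",
      bool(np.all(np.diff(J_post) > 0)))
```

Under the pre-stimulus-noise assumption the information peaks at a finite decay time constant (here roughly twice the memory delay) regardless of the noise level sigma, whereas when noise is restricted to the delay period the information increases monotonically with the time constant, favoring the perfect integrating mode.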
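The contrasting behavior of the feedforward architecture, in which a perturbation propagates through a finite number of stages and then leaves the network, can likewise be illustrated with a small simulation. Again, this is a hedged sketch under assumed dynamics (a chain of identical leaky stages), not the specific model presented.

```python
# Illustrative sketch (not the presented model): a unit impulse injected
# into the first stage of a feedforward chain of leaky units
#   dx_i/dt = (x_{i-1} - x_i) / tau
# produces only a transient at the last stage, peaking near
# (n_stages - 1) * tau and then decaying to ~0: noise "flows out".
import numpy as np

def last_stage_response(n_stages=10, tau=1.0, dt=0.01, t_max=50.0):
    """Forward-Euler trajectory of the last stage after a unit impulse to stage 1."""
    x = np.zeros(n_stages)
    x[0] = 1.0                                   # impulse into the first stage
    trace = []
    for _ in range(int(t_max / dt)):
        x[1:] += dt / tau * (x[:-1] - x[1:])     # RHS uses the old state
        x[0] += dt / tau * (-x[0])
        trace.append(x[-1])
    return np.array(trace)

trace = last_stage_response()
t = np.arange(len(trace)) * 0.01
print("last-stage response peaks near t =", t[np.argmax(trace)])  # ~ (n_stages - 1) * tau
print("response at t = 50 (perturbation has left the chain):", trace[-1])
```

A perfect integrator, by contrast, would hold the same perturbation indefinitely, which is why the feedforward chain fares better than the line attractor when there is no opportunity to reset activity before the stimulus arrives.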
Conference: Computational and Systems Neuroscience 2010, Salt Lake City, UT, United States, 25 Feb - 2 Mar, 2010.

Presentation Type: Poster Presentation

Topic: Poster session II

Citation: Lim S and Goldman M (2010). Optimal network architectures for short-term memory under different biological settings. Front. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2010. doi: 10.3389/conf.fnins.2010.03.00271

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated. Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 05 Mar 2010; Published Online: 05 Mar 2010.

* Correspondence: Sukbin Lim, UC Davis, Center for Neuroscience, Davis, United States, sukbin@uchicago.edu
