Abstract

The theory of attractor neural networks has been influential in our understanding of the neural processes underlying spatial, declarative, and episodic memory. Many theoretical studies focus on the inherent properties of an attractor, such as its structure and capacity. Relatively little is known about how an attractor neural network responds to external inputs, which often carry conflicting information about a stimulus. In this paper we analyze the behavior of an attractor neural network driven by two conflicting external inputs. Our focus is on analyzing the emergent properties of the megamap model, a quasi-continuous attractor network in which place cells are flexibly recombined to represent a large spatial environment. In this model, the system shows a sharp transition from the winner-take-all mode, which is characteristic of standard continuous attractor neural networks, to a combinatorial mode in which the equilibrium activity pattern combines embedded attractor states in response to conflicting external inputs. We derive a numerical test for determining the operational mode of the system a priori. We then derive a linear transformation from the full megamap model with thousands of neurons to a reduced 2-unit model that has similar qualitative behavior. Our analysis of the reduced model and explicit expressions relating the parameters of the reduced model to the megamap elucidate the conditions under which the combinatorial mode emerges and the dynamics in each mode given the relative strength of the attractor network and the relative strength of the two conflicting inputs. Although we focus on a particular attractor network model, we describe a set of conditions under which our analysis can be applied to more general attractor neural networks.
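
The sharp transition between the two operational modes described above can be illustrated with a toy simulation. The sketch below is not the paper's reduced 2-unit model or its derived parameter constraints; it is a minimal two-unit firing-rate network with assumed placeholder parameters (w_self, w_inh, I1, I2, tau) in which strong mutual inhibition yields a winner-take-all equilibrium, while weak mutual inhibition lets both units remain active, analogous to the combinatorial mode.

```python
# Minimal sketch: two rate units, each standing for one embedded attractor
# state, driven by two conflicting external inputs I1 and I2.  All parameter
# names and values are illustrative assumptions, not taken from the megamap model.
import numpy as np

def equilibrium(w_self, w_inh, I1, I2, tau=10.0, dt=0.1, T=500.0):
    """Integrate tau * dr/dt = -r + [W r + I]_+ and return the steady-state rates."""
    W = np.array([[w_self, -w_inh],
                  [-w_inh, w_self]])
    I = np.array([I1, I2])
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        r += (dt / tau) * (-r + np.maximum(W @ r + I, 0.0))
    return r

# Strong mutual inhibition: only the unit receiving the stronger input stays
# active (winner-take-all mode).
print(equilibrium(w_self=0.5, w_inh=1.5, I1=1.0, I2=0.8))   # ~[2.0, 0.0]

# Weak mutual inhibition: both units remain active at intermediate rates
# (combinatorial mode).
print(equilibrium(w_self=0.5, w_inh=0.2, I1=1.0, I2=0.8))   # ~[1.62, 0.95]
```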

Highlights

  • The theory of attractor neural networks has greatly influenced our understanding of the mechanisms underlying the computations performed by neural networks

  • We find that the two models behave in the same way qualitatively (Fig. 3), and the analytical tractability of the 2-unit model permits us to derive explicit equations for the set of parameters leading to each operational mode and for the relative strength of external input leading to hysteresis or to two co-stable activity bumps

  • We present a mathematical analysis of the properties of the megamap attractor neural network that emerge when the network represents a sufficiently large spatial environment [10]

Summary

Introduction

The theory of attractor neural networks has greatly influenced our understanding of the mechanisms underlying the computations performed by neural networks. The combinatorial mode is an interesting emergent property of the model that may be related to the partial remapping of hippocampal place cells sometimes observed when an animal is introduced to a new environment that simultaneously resembles two different familiar environments. In this cue-conflict situation, the evoked neural responses are often mixtures of the responses to both environments rather than representations of one environment only [31]. The combinatorial mode emerges in the megamap model in sufficiently large environments when the weights are set optimally through gradient descent, but not when the weights are set by the basic Hebbian learning rule [32, 33]. The latter method is widely used in attractor network models of place cells representing multiple environments [5, 6, 34–36].
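
For concreteness, the basic Hebbian prescription referenced above is sketched below. This is a generic version of the correlation-based rule used in multi-map place-cell attractor models, not the megamap's gradient-descent-optimized weights; the Gaussian tuning curves, the numbers of maps, cells, and locations, and the normalization are all illustrative assumptions.

```python
# Sketch of a basic Hebbian rule embedding several spatial maps into one
# recurrent weight matrix: each map contributes the sum of outer products of
# the coactive place-cell activity patterns.  All sizes and tuning parameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_maps, n_locs, sigma = 200, 3, 100, 0.05

track = np.linspace(0.0, 1.0, n_locs)            # sampled locations on a unit track
W = np.zeros((n_cells, n_cells))
for _ in range(n_maps):
    centers = rng.uniform(0.0, 1.0, n_cells)     # place-field centers are remapped per environment
    # Gaussian place-field activity F[i, x]: cell i's rate at location x.
    F = np.exp(-(centers[:, None] - track[None, :]) ** 2 / (2 * sigma ** 2))
    W += F @ F.T / (n_maps * n_locs)             # Hebbian (correlation-based) term
np.fill_diagonal(W, 0.0)                         # no self-connections
```

In this prescription the weight matrix simply superimposes the embedded maps; as noted above, the combinatorial mode emerges only when the megamap's weights are instead set optimally through gradient descent.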

Megamap Model
Numerical Example of the Operational Modes of the Megamap
Numerical Test for the Operational Mode
Reduction of the Megamap Model to the 2-Unit Model
Constraints on the Parameters of the 2-Unit Model
Analysis of the Operational Modes of the 2-Unit Model
Characterization of the Operational Modes
Bifurcations of the Dynamical System
Conclusions
One Active Unit
Two Active Units
No Active Units
Findings
