Abstract
Our cognition relies on the ability of the brain to segment hierarchically structured events on multiple scales. Recent evidence suggests that the brain performs this event segmentation based on the structure of state-transition graphs behind sequential experiences. However, the underlying circuit mechanisms are poorly understood. In this paper, we propose an extended attractor network model for graph-based hierarchical computation, which we call the Laplacian associative memory. This model generates multiscale representations for communities (clusters) of associative links between memory items, and the scale is regulated by the heterogeneous modulation of inhibitory circuits. We analytically and numerically show that these representations correspond to graph Laplacian eigenvectors, a popular method for graph segmentation and dimensionality reduction. Finally, we demonstrate that our model exhibits chunked sequential activity patterns resembling hippocampal theta sequences. Our model connects graph theory and attractor dynamics to provide a biologically plausible mechanism for abstraction in the brain.
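The spectral method the abstract refers to can be illustrated with a minimal sketch (not the authors' code; the toy graph and all variable names are illustrative). The sign pattern of the second-smallest eigenvector of the graph Laplacian, the Fiedler vector, separates two tightly linked communities joined by a single bridge edge:

```python
import numpy as np

# Toy graph: two tightly linked communities ({0,1,2} and {3,4,5})
# joined by one bridge edge (2-3). Purely illustrative.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2),   # community 1 (a triangle)
             (3, 4), (3, 5), (4, 5),   # community 2 (a triangle)
             (2, 3)]:                  # bridge between communities
    A[i, j] = A[j, i] = 1.0

# Graph Laplacian: L = D - A, with D the diagonal degree matrix.
L = np.diag(A.sum(axis=1)) - A

# eigh returns eigenvalues in ascending order; the smallest is 0
# (constant eigenvector), and column 1 is the Fiedler vector.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# Thresholding the Fiedler vector at zero segments the graph:
# nodes 0-2 receive one label and nodes 3-5 the other.
labels = fiedler > 0
print(labels)
```

Coarser or finer segmentations come from including more eigenvectors; in the paper's model, an analogous scale choice is attributed to inhibitory modulation.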
Highlights
The brain builds a hierarchical knowledge structure through the abstraction of conceptual building blocks such as groups and segments
We show that the attractor dynamics of recurrent neural circuits offer a biologically plausible mechanism for hierarchical segmentation
We found that an extended model of associative memory autonomously performs segmentation by finding groups of tightly linked memories
Summary
The brain builds a hierarchical knowledge structure through the abstraction of conceptual building blocks such as groups and segments. It has been shown that event segmentation performed by human subjects behaviorally reflects the community structures (or clusters) of the state-transition graphs underlying sequential experiences, and neurobiologically, sensory events within the same community are represented by more similar activity patterns than those belonging to other communities [6,7]. Such graph segmentation of events is considered to benefit the temporal abstraction of actions in reinforcement learning [8,9]. Graph-based representations can also explain many characteristics of hippocampal place cells and entorhinal grid cells [10].