Abstract

Cognitive maps are mental representations of spatial and conceptual relationships in an environment, and are critical for flexible behavior. To form these abstract maps, the hippocampus has to learn to separate or merge aliased observations appropriately in different contexts in a manner that enables generalization and efficient planning. Here we propose a specific higher-order graph structure, clone-structured cognitive graph (CSCG), which forms clones of an observation for different contexts as a representation that addresses these problems. CSCGs can be learned efficiently using a probabilistic sequence model that is inherently robust to uncertainty. We show that CSCGs can explain a variety of cognitive map phenomena such as discovering spatial relations from aliased sensations, transitive inference between disjoint episodes, and formation of transferable schemas. Learning different clones for different contexts explains the emergence of splitter cells observed in maze navigation and event-specific responses in lap-running experiments. Moreover, learning and inference dynamics of CSCGs offer a coherent explanation for disparate place cell remapping phenomena. By lifting aliased observations into a hidden space, CSCGs reveal latent modularity useful for hierarchical abstraction and planning. Altogether, CSCG provides a simple unifying framework for understanding hippocampal function, and could be a pathway for forming relational abstractions in artificial intelligence.

Highlights

  • Cognitive maps are mental representations of spatial and conceptual relationships in an environment, and are critical for flexible behavior

  • Our observations suggest that global remapping, partial remapping, and rate remapping can be explained using clone-structured cognitive graph (CSCG): they are manifestations of learning and inference dynamics using a cloned structure when multiple maps are represented in the same model

  • In this paper we pursued the strong hypothesis that the hippocampus performs a singular sequence learning algorithm that learns a relational, content-agnostic structure, and demonstrated evidence for its validity[4,52]. Realizing this core idea required several interrelated advancements: (1) a learning mechanism to extract higher-order graphs from sequential observations, (2) a storage and representational structure that supports transitivity, (3) efficient context-sensitive and probabilistic retrieval, and (4) learning of hierarchies that support efficient planning, techniques we developed in this paper

Introduction

Cognitive maps are mental representations of spatial and conceptual relationships in an environment, and are critical for flexible behavior. To form these abstract maps, the hippocampus has to learn to separate or merge aliased observations appropriately in different contexts in a manner that enables generalization and efficient planning. Using just principles of higher-order sequence learning and probabilistic inference, CSCGs can explain a variety of cognitive map phenomena such as discovering spatial relations from an aliased sensory stream, transitive inference between disjoint episodes of experiences, transferable structural knowledge, and shortcut-finding in novel environments. Dynamic Markov coding makes a higher-order model by splitting the state representing event C into multiple copies, one for each incoming connection, and further specializes their outgoing connections through learning. This state cloning mechanism permits a sparse representation of higher-order dependencies, and has been discovered in various domains[22,23,24,25].
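The state-cloning idea above can be sketched in a few lines: split each observation into one clone per distinct predecessor (incoming connection), so that second-order context becomes first-order structure over clones. This is a minimal illustrative sketch of the concept only, not the paper's implementation; the function and clone-naming scheme (`clone_states`, `"C_0"`) are our own assumptions.

```python
from collections import defaultdict

def clone_states(sequence):
    """Lift an aliased observation sequence into clone states.

    Each observation gets one clone per distinct predecessor, so an
    aliased observation seen in two contexts maps to two different
    hidden states (illustrative sketch, not the paper's algorithm).
    """
    clones = {}                      # (prev_obs, obs) -> clone id
    transitions = defaultdict(int)   # counts over clone-to-clone edges
    prev_clone = prev_obs = None
    for obs in sequence:
        key = (prev_obs, obs)
        if key not in clones:
            # number clones of this observation in order of discovery
            n = sum(1 for k in clones if k[1] == obs)
            clones[key] = f"{obs}_{n}"
        clone = clones[key]
        if prev_clone is not None:
            transitions[(prev_clone, clone)] += 1
        prev_clone, prev_obs = clone, obs
    return clones, dict(transitions)

# The aliased observation "C" occurs after both "A" and "B":
seq = ["A", "C", "X", "B", "C", "Y", "A", "C", "X"]
clones, trans = clone_states(seq)
# "C" is split into two clones, one per incoming context,
# so the two contexts are no longer confounded.
```

In a full CSCG the clone assignments are not fixed by the predecessor but learned with an EM-style procedure over a cloned hidden Markov model; the sketch only shows why cloning lets an aliased observation carry context.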

