Abstract

Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.

Highlights

  • In both humans and animals, memory retrieval is a significant learning event (Dudai, 2012; Roediger and Butler, 2011; Spear, 1973)

  • We develop a latent cause theory of Pavlovian conditioning that treats context as the input into a structure learning system, which outputs a parse of experience into latent causes—hypothetical entities in the environment that govern the distribution of stimulus configurations (Courville, 2006; Courville et al., 2006; Gershman et al., 2010; Gershman and Niv, 2012; Gershman et al., 2013a, 2015; Soto et al., 2014)

  • This allows the theory to move beyond the ‘extinction=unlearning’ assumption by positing that different latent causes are inferred during acquisition and extinction, and two different associations are learned

Introduction

In both humans and animals, memory retrieval is a significant learning event (Dudai, 2012; Roediger and Butler, 2011; Spear, 1973). A memory’s strength and content can be modified immediately after retrieval, and such post-retrieval learning is often more potent than new learning without retrieval. While this phenomenon is well documented, its underlying mechanisms remain obscure. Central to our theory is the idea that memory is inferential in nature: decisions about whether to modify an old memory or form a new memory are guided by inferences about the latent causes of sensory data (Gershman et al., 2010, 2014, 2015). Memories contain statistical information about inferred latent causes (when they are likely to occur, what sensory data they tend to generate). We formalize this idea as a probabilistic model, and demonstrate its explanatory power by simulating a wide range of post-retrieval memory modification phenomena.
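The inference step described above can be sketched computationally. The following is a minimal illustrative sketch, not the paper's implementation: it combines a Chinese restaurant process prior over latent causes with a Gaussian observation likelihood, and updates the winning cause's memory with a delta rule. The one-dimensional observations, the variances, the learning rate, and the function names (`crp_prior`, `assign_cause`, `observe`) are all illustrative assumptions.

```python
import math

def crp_prior(counts, alpha):
    """Chinese restaurant process prior over existing causes plus one
    candidate new cause. counts[k] = observations assigned to cause k."""
    n = sum(counts)
    return [c / (n + alpha) for c in counts] + [alpha / (n + alpha)]

def gauss(x, mean, var):
    """Gaussian likelihood of a 1-D observation under a cause's mean."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def assign_cause(obs, means, counts, alpha=1.0):
    """MAP latent cause for obs; index len(means) means 'new cause'.
    A broad prior predictive (var=10) stands in for the new cause."""
    prior = crp_prior(counts, alpha)
    like = [gauss(obs, m, 1.0) for m in means] + [gauss(obs, 0.0, 10.0)]
    post = [p * l for p, l in zip(prior, like)]
    return max(range(len(post)), key=lambda k: post[k])

def observe(obs, means, counts, lr=0.3, alpha=1.0):
    """One trial: infer the latent cause, then either form a new memory
    or modify the old one via a delta-rule (associative) update."""
    k = assign_cause(obs, means, counts, alpha)
    if k == len(means):                 # new cause inferred -> new memory
        means.append(obs)
        counts.append(1)
    else:                               # old cause inferred -> modify memory
        means[k] += lr * (obs - means[k])
        counts[k] += 1
    return k
```

Under these assumptions, observations statistically similar to past experience are assigned to the old cause and modify its trace, while a sufficiently novel observation (e.g., the shift from acquisition to extinction) spawns a new latent cause, so the old memory is preserved rather than unlearned.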

