Abstract

Many models of classical conditioning fail to describe important phenomena, notably the rapid return of fear after extinction. To address this shortcoming, converging evidence supports the idea that learning agents rely on latent-state inferences: the ability to index disparate associations between cues and rewards (or penalties) and to infer which index (i.e., latent state) is currently active. Our goal was to develop a model of latent-state inferences that uses latent states to predict rewards from cues efficiently and that can describe behavior in a diverse set of experiments. The resulting model combines a Rescorla-Wagner rule, for which updates to associations are proportional to prediction error, with an approximate Bayesian rule, for which beliefs in latent states are proportional to prior beliefs and an approximate likelihood based on current associations. In simulation, we demonstrate the model’s ability to reproduce learning effects both famously explained and not explained by the Rescorla-Wagner model, including the rapid return of fear after extinction, the Hall-Pearce effect, the partial reinforcement extinction effect, backwards blocking, and memory modification. Lastly, we derive our model as an online algorithm for maximum likelihood estimation, demonstrating that it is an efficient approach to outcome prediction. Establishing such a framework is a key step towards quantifying normative and pathological ranges of latent-state inferences in various contexts.
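The combination described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian-shaped approximate likelihood for each latent state's prediction error, and the array shapes, parameter names (`alpha`, `sigma`), and belief-weighted update are our own simplifying choices.

```python
import numpy as np

def latent_state_update(V, p, x, r, alpha=0.1, sigma=1.0):
    """One trial of a hybrid latent-state rule (illustrative sketch).

    V     : (K, D) cue-to-reward associations, one row per latent state
    p     : (K,) prior beliefs over latent states (sums to 1)
    x     : (D,) cue vector for this trial
    r     : scalar reward (or penalty) outcome
    alpha : learning rate (hypothetical value)
    sigma : assumed outcome-noise scale (hypothetical value)
    """
    pred = V @ x                        # each state's reward prediction
    err = r - pred                      # per-state prediction errors
    # Approximate likelihood of the outcome under each state's associations
    lik = np.exp(-0.5 * (err / sigma) ** 2)
    p_new = p * lik                     # beliefs ∝ prior × approx. likelihood
    p_new /= p_new.sum()
    # Rescorla-Wagner update, weighted by belief in each latent state
    V_new = V + alpha * p_new[:, None] * err[:, None] * x[None, :]
    return V_new, p_new
```

Note the division of labor: beliefs over latent states shift toward whichever state's associations best predicted the outcome, while the associations themselves move by a belief-weighted delta rule.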

Highlights

  • Learning and decision-making are fundamental aspects of day-to-day human life

  • An early and influential example of model free learning is the Rescorla-Wagner (RW) model [8], which proposed that associative strength updates in response to new experiences in proportion to the magnitude of a prediction error

  • The purpose of the current work is to introduce a new model of latent-state learning and to verify the suitability and utility of this model for explaining group-level effects of classical conditioning

Introduction

Learning and decision-making are fundamental aspects of day-to-day human life. Many mental health disorders can be conceptualized in terms of biases or errors in learning and decision-making [1]. The study of how humans learn and make decisions is therefore an important topic of inquiry, and the past two decades have witnessed a significant surge in the application of computational modeling to human learning and decision-making [2,3,4,5]. One significant insight from this literature is the distinction between model-free and model-based learning [6,7]. The Rescorla-Wagner (RW) model, and the similar model-free formulations it inspired [9,10], powerfully explained many aspects of learning.
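The RW delta rule referenced above is compact enough to state directly. This is a standard textbook formulation of the update, not code from the present work; the learning-rate value is a placeholder.

```python
import numpy as np

def rescorla_wagner_update(V, x, r, alpha=0.1):
    """Classic Rescorla-Wagner delta rule (illustrative).

    V     : array of associative strengths, one per cue
    x     : array marking present cues (1 = present, 0 = absent)
    r     : observed outcome on this trial
    alpha : learning rate (placeholder value)
    """
    prediction = (V * x).sum()      # summed strength of the cues present
    delta = r - prediction          # prediction error
    return V + alpha * delta * x    # only present cues are updated
```

Associative strength thus changes in proportion to the prediction error, and only for cues present on the trial, which is the property the highlights summarize.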
