Abstract

Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task, we show that subjects' choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning, correlating with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates.

Highlights

  • Model-based approaches to cognition posit that agents continually perform online inference about the current state of the world, based on incoming sensory information

  • It is often assumed that agents form and update beliefs only about the current state of the world, an approach known as Bayesian filtering

  • Very little is known about whether humans instead adopt sequential inference strategies (forming beliefs about sequences of states rather than only the current state), and if they do, about the neuronal mechanisms involved. We addressed this by applying computational modelling to data collected during a probabilistic reversal task

Introduction

Model-based approaches to cognition posit that agents continually perform online inference about the current state of the world, based on incoming sensory information. These approaches typically assume that the agent optimises beliefs about the current state alone, an approach referred to as Bayesian filtering. Since the joint probability of a sequence of states is not, in general, the same as the product of the marginal probabilities of the individual states, this leads to two alternative definitions of optimality: optimality of inference about individual states, and optimality of inference about sequences of states. These diverging goals are captured in the sum-product and max-sum algorithms for exact inference [1]. Compared with Bayesian filtering, sequential inference entails relatively minor increases in computational cost and represents a plausible hypothesis regarding the implementation of actual cognitive processes in human subjects.
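The behavioural dissociation between the two strategies can be illustrated with a toy two-state hidden Markov model: the state that filtering (sum-product) deems most probable at a given time can differ from the state assigned at that time by the most probable sequence (max-product, i.e. the Viterbi path). The transition and observation probabilities below are illustrative assumptions, not parameters estimated from the task.

```python
# Sketch contrasting Bayesian filtering (per-time marginals) with MAP
# sequence inference (Viterbi) on an assumed two-state toy HMM.

def forward_filter(prior, trans, liks):
    """Filtered marginals p(s_t | y_1..t) via the sum-product recursion."""
    belief = [p * l for p, l in zip(prior, liks[0])]
    z = sum(belief)
    belief = [b / z for b in belief]
    beliefs = [belief]
    for lik in liks[1:]:
        # Predict: push the belief through the transition matrix.
        pred = [sum(belief[i] * trans[i][j] for i in range(len(belief)))
                for j in range(len(belief))]
        # Update: weight by the likelihood of the new observation.
        belief = [p * l for p, l in zip(pred, lik)]
        z = sum(belief)
        belief = [b / z for b in belief]
        beliefs.append(belief)
    return beliefs

def viterbi(prior, trans, liks):
    """Most probable state sequence via the max-product recursion."""
    n = len(prior)
    delta = [prior[i] * liks[0][i] for i in range(n)]
    back = []
    for lik in liks[1:]:
        new_delta, ptr = [], []
        for j in range(n):
            best_i = max(range(n), key=lambda i: delta[i] * trans[i][j])
            ptr.append(best_i)
            new_delta.append(delta[best_i] * trans[best_i][j] * lik[j])
        delta = new_delta
        back.append(ptr)
    path = [max(range(n), key=lambda j: delta[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Assumed toy model: state 0 is "sticky", state 1 is not.
prior = [0.5, 0.5]
trans = [[0.9, 0.1],   # from state 0
         [0.5, 0.5]]   # from state 1
liks = [[0.4, 0.6],    # observation 1 weakly favours state 1
        [0.5, 0.5]]    # observation 2 is uninformative

filtered = forward_filter(prior, trans, liks)
print(filtered[0])                  # filtering at t=1 favours state 1
print(viterbi(prior, trans, liks))  # but the MAP sequence is [0, 0]
```

Here filtering commits to state 1 at the first time step, whereas the most probable sequence passes through state 0 at that step because state 0's "stickiness" makes the joint path more likely. This is the kind of divergence in beliefs, and hence in choices, that renders the two strategies behaviourally dissociable.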
