Abstract

Extracting the temporal structure of sequences of events is crucial for perception, decision-making, and language processing. Here, we investigate the mechanisms by which the brain acquires knowledge of sequences and the possibility that successive brain responses reflect the progressive extraction of sequence statistics at different timescales. We measured brain activity using magnetoencephalography in humans exposed to auditory sequences with various statistical regularities, and we modeled this activity as the theoretical surprise levels derived from several learning models. Successive brain waves were related to different types of statistical inference. Early post-stimulus brain waves denoted sensitivity to a simple statistic, the frequency of items estimated over a long timescale (habituation). Mid-latency and late brain waves conformed qualitatively and quantitatively to the computational properties of a more complex inference: the learning of recent transition probabilities. Our findings thus support the existence of multiple computational systems for sequence processing involving statistical inferences at multiple scales.

Highlights

  • From the cycle of the seasons to speech, events in the environment are rarely independent of one another; instead, they are often structured in time

  • To investigate whether brain responses were modulated by the statistical regularities of the auditory sequences, we explored two timescales at which those regularities may emerge in our experiment: global and local

  • We show that a parsimonious model, the estimation of local transition probabilities (TP model: p(A|B) = 1 − p(B|B) and p(B|A) = 1 − p(A|A)), predicts the effects of local and global statistics observed in brain responses, and that those effects are incompatible with two other models, which learn either the local frequency of items (item frequency, IF, model: p(A) = 1 − p(B)) or the local frequency of alternations between those items (alternation frequency, AF, model: p(alt.) = 1 − p(rep.)); a toy sketch of the three models follows below
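By way of illustration, the three candidate models can be written as the same estimator applied to different events: each statistic is tracked with exponentially forgetful ("leaky") counts and turned into theoretical surprise, −log2 p(observed item). The sketch below is a toy Python implementation under assumed choices (a decay of 0.9 and an add-one prior), not the authors' analysis code:

    import numpy as np

    def leaky_prob(events, conditions, decay):
        # Estimate p(event | condition) with exponential forgetting.
        # decay in (0, 1]; decay = 1 gives a global (non-leaky) estimate.
        # Returns the prediction made *before* each observation, with a
        # flat add-one (Laplace) prior.
        k = n = 0.0
        p = np.empty(len(events))
        for t, (e, c) in enumerate(zip(events, conditions)):
            p[t] = (k + 1) / (n + 2)
            if c:                        # update only where the condition holds
                n = decay * n + 1
                k = decay * k + e
        return p

    def surprise(seq, model, decay=0.9):
        # Theoretical surprise, -log2 p(item), for each item of a binary
        # sequence (A = 0, B = 1) under the IF, AF, or TP model.
        s = np.asarray(seq, int)
        T = len(s)
        if model == "IF":                # item frequency: p(A) = 1 - p(B)
            p_b = leaky_prob(s == 1, np.ones(T, bool), decay)
            p_obs = np.where(s == 1, p_b, 1 - p_b)
        elif model == "AF":              # alternation frequency: p(alt) = 1 - p(rep)
            alt = np.r_[False, s[1:] != s[:-1]]
            p_alt = leaky_prob(alt, np.r_[False, np.ones(T - 1, bool)], decay)
            p_obs = np.where(alt, p_alt, 1 - p_alt)
        else:                            # TP: p(A|B) = 1 - p(B|B), etc.
            prev = np.r_[-1, s[:-1]]     # previous item (-1: no context yet)
            p_obs = np.full(T, 0.5)      # flat prediction for the first item
            for ctx in (0, 1):           # condition on the previous item
                cond = prev == ctx
                p_b = leaky_prob(s == 1, cond, decay)
                p_obs[cond] = np.where(s[cond] == 1, p_b[cond], 1 - p_b[cond])
        return -np.log2(p_obs)

    # Example: a repetition-biased sequence with p(repeat) = 0.8
    rng = np.random.default_rng(0)
    seq = [0]
    for _ in range(199):
        seq.append(seq[-1] if rng.random() < 0.8 else 1 - seq[-1])
    for m in ("IF", "AF", "TP"):
        print(m, round(float(surprise(seq, m).mean()), 2))

The decay parameter sets the timescale over which each statistic is estimated; fitting it to brain responses is one way to make the local-versus-global distinction quantitative.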

Introduction

From the cycle of the seasons to speech, events in the environment are rarely independent of one another; instead, they are often structured in time. The most basic level of coding used to represent sequential inputs is the encoding of statistical regularities, such as item frequency and transition probabilities. Many experiments demonstrate that the brain possesses powerful statistical learning mechanisms that extract such regularities from sequential inputs (Armstrong et al., 2017; Santolin and Saffran, 2018). In infants, learning the transition probabilities between syllables seems to be a building block on top of which words and syntactic tree structures are built (Saffran et al., 1996; Chomsky and Ronat, 1998). Which statistics does the brain estimate, and over which timescales? There is no single answer to those questions because different brain processes estimate different statistics computed over different timescales. We show how learning models, fitted to brain responses, can help tease apart those processes.
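What counts as a long or short timescale can be made concrete. Under the exponential forgetting used in the sketch above (an illustrative assumption, not a commitment of the text), an observation n trials in the past carries weight decay**n, so the memory's half-life is ln(1/2) / ln(decay):

    import numpy as np

    # Half-life (in trials) of an exponentially forgetful estimate.
    for decay in (0.99, 0.90, 0.60):
        half_life = np.log(0.5) / np.log(decay)
        print(f"decay = {decay:.2f}: half-life = {half_life:5.1f} trials")

    # decay = 0.99: half-life =  69.0 trials  (near-global estimate)
    # decay = 0.90: half-life =   6.6 trials  (local estimate)
    # decay = 0.60: half-life =   1.4 trials  (very local estimate)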
