Abstract

To form a percept of the environment, the brain needs to solve the binding problem—inferring whether signals come from a common cause and are integrated or come from independent causes and are segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference, but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task relevance into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain’s uncertainty about the world’s causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation, which is crucial for effective interactions with the environment.
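
For orientation, reliability-weighted integration under the common-cause assumption corresponds to the textbook precision-weighted average sketched below; the notation (x_A, x_V, sigma_A, sigma_V) is illustrative and not taken from this study:

    \hat{s}_{AV} = w_A x_A + w_V x_V,
    \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
    \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_A^2 + 1/\sigma_V^2},

where x_A and x_V denote the internal auditory and visual location measurements and \sigma_A^2, \sigma_V^2 their sensory variances, so the more reliable signal receives the larger weight.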

Highlights

  • In our natural environment, our senses are exposed to a barrage of sensory signals: the sight of a rapidly approaching truck, its looming motor noise, the smell of traffic fumes

  • The ability to tell whether various sensory signals come from the same or different sources is essential for forming a coherent percept of the environment

  • We combined Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task to show that the brain dynamically encodes multiple spatial estimates while accomplishing Bayesian causal inference

Introduction

Our senses are exposed to a barrage of sensory signals: the sight of a rapidly approaching truck, its looming motor noise, the smell of traffic fumes. To form a coherent percept, the brain must infer whether such signals originate from a common source and should be integrated, or from independent sources and should be segregated. Hierarchical Bayesian causal inference provides a rational strategy for arbitrating between sensory integration and segregation in perception [2]. Bayesian causal inference explicitly models the potential causal structures that could have generated the sensory signals—i.e., whether the signals come from common or independent sources. The brain, however, does not know the causal structure of the world that gave rise to the sensory signals. To account for this causal uncertainty, a final estimate (e.g., of an object’s location) is obtained by averaging the estimates under the two causal structures (i.e., the common-source and independent-source models), weighted by each causal structure’s posterior probability—a strategy referred to as model averaging (for other decisional strategies, see [13]).
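
To make the model-averaging computation concrete, the sketch below implements the standard Gaussian formulation of Bayesian causal inference for audiovisual localisation. It is a minimal illustration, not the fitting procedure used in this study; the function name and the parameter names and default values (sigma_a, sigma_v, the spatial prior mu_p and sigma_p, and the common-cause prior p_common) are assumptions chosen for demonstration.

    import numpy as np

    def bci_model_averaging(x_a, x_v, sigma_a, sigma_v,
                            sigma_p=10.0, mu_p=0.0, p_common=0.5):
        # Variances of the auditory and visual measurements and of the spatial prior.
        var_a, var_v, var_p = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2

        # Likelihood of the measurement pair under a common cause (C = 1).
        d1 = var_a * var_v + var_a * var_p + var_v * var_p
        like_c1 = np.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                                 + (x_a - mu_p) ** 2 * var_v
                                 + (x_v - mu_p) ** 2 * var_a) / d1) / (2 * np.pi * np.sqrt(d1))

        # Likelihood under independent causes (C = 2).
        d2 = (var_a + var_p) * (var_v + var_p)
        like_c2 = np.exp(-0.5 * ((x_a - mu_p) ** 2 / (var_a + var_p)
                                 + (x_v - mu_p) ** 2 / (var_v + var_p))) / (2 * np.pi * np.sqrt(d2))

        # Posterior probability of the common-cause structure.
        post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

        # Location estimates under each causal structure:
        # fused (reliability-weighted) if C = 1, unisensory if C = 2.
        s_fused = ((x_a / var_a + x_v / var_v + mu_p / var_p)
                   / (1 / var_a + 1 / var_v + 1 / var_p))
        s_aud = (x_a / var_a + mu_p / var_p) / (1 / var_a + 1 / var_p)
        s_vis = (x_v / var_v + mu_p / var_p) / (1 / var_v + 1 / var_p)

        # Model averaging: combine the estimates of the two causal structures,
        # each weighted by its posterior probability.
        s_hat_a = post_c1 * s_fused + (1 - post_c1) * s_aud
        s_hat_v = post_c1 * s_fused + (1 - post_c1) * s_vis
        return s_hat_a, s_hat_v, post_c1

    # Example: a reliable visual and a noisier auditory signal 10 degrees apart.
    print(bci_model_averaging(x_a=8.0, x_v=-2.0, sigma_a=6.0, sigma_v=2.0))

With such illustrative settings, small audiovisual disparities yield a high common-cause probability and estimates drawn towards the more reliable signal, whereas large disparities shift the posterior towards independent causes and leave the unisensory estimates largely intact.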
