Abstract

Recent studies utilizing electrophysiological speech envelope reconstruction have sparked renewed interest in the cocktail party effect by showing that auditory neurons entrain to selectively attended speech. Yet, the neural networks underlying attention to speech in naturalistic audiovisual settings with multiple sound sources remain poorly understood. We collected functional brain imaging data while participants viewed audiovisual video clips of lifelike dialogues with concurrent distracting speech in the background. Dialogues were presented in a full-factorial design crossing task (listen to the dialogues vs. ignore them), audiovisual quality, and semantic predictability. We used univariate analyses in combination with multivariate pattern analysis (MVPA) to study modulations of brain activity related to attentive processing of audiovisual speech. We found that attentive speech processing produced distinct spatiotemporal modulation profiles in distributed cortical areas, including sensory and frontal control networks. Semantic coherence modulated attention-related activation patterns in the earliest stages of auditory cortical processing, suggesting that the auditory cortex is involved in high-level speech processing. Our results corroborate views that emphasize the dynamic nature of attention, with task specificity and context as cornerstones of the underlying neurocognitive mechanisms.
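As a concrete illustration of the MVPA approach referred to above, the sketch below shows a generic decoding setup: a linear classifier trained on trial-wise voxel activity patterns to distinguish attend-dialogue from ignore-dialogue conditions, evaluated with leave-one-run-out cross-validation. This is not the authors' actual pipeline; the data are random placeholders, and the variable names, shapes, and region-of-interest assumptions are hypothetical.

```python
# Illustrative MVPA decoding sketch (not the authors' pipeline).
# A linear classifier is trained on trial-wise voxel patterns to separate
# attend vs. ignore conditions, with leave-one-run-out cross-validation.
# All data below are random placeholders with hypothetical shapes.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 8, 12, 500

# X: activation pattern per trial within a region of interest (trials x voxels)
# y: attention condition per trial (1 = attend dialogue, 0 = ignore dialogue)
# runs: acquisition-run labels, used as cross-validation folds
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))
y = rng.integers(0, 2, size=n_runs * trials_per_run)
runs = np.repeat(np.arange(n_runs), trials_per_run)

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean leave-one-run-out decoding accuracy: {scores.mean():.2f}")
```

In a real analysis, X would typically contain trial- or condition-wise response estimates (e.g., from a general linear model) within anatomically or functionally defined regions, and the same decoding logic could be applied in a searchlight fashion across the cortex.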

Highlights

  • Listening and comprehending speech in noisy environments is so effortless for humans that we often ignore its computational demands

  • In the current study we found that activity patterns related to attentive AV speech processing (AttnAVSMs) in the auditory cortex and its vicinity were modulated by both the semantic coherence and the AV quality of the video stimuli

  • The present results suggest that attentive processing of AV speech in a cocktail-party-like setting is associated with distinct modulation of neuronal responses in both sensory and other cortical regions that follow predictable temporal profiles


Introduction

Listening to and comprehending speech in noisy environments is so effortless for humans that we often ignore its computational demands. Attention-related modulation of neural responses to speech sounds has classically been presumed to comprise simple mechanisms that nonspecifically increase the gain and fidelity of neuronal responses (Briggs et al., 2013). This view has changed during the last decade thanks to methodological advances that enable studying selective attention in ecologically valid settings with multiple sound sources. Functional magnetic resonance imaging (fMRI) studies suggest that attention modulates processing in the auditory cortex (here, the core, belt, and parabelt; Moerel et al., 2014) and surrounding regions when listening to speech in the presence of noise (e.g., Alho et al., 2003; Alho et al., 2006).

