Abstract

Figure-ground segregation is fundamental to listening in complex acoustic environments. An ongoing debate pertains to whether segregation requires attention or is “automatic” and preattentive. In this magnetoencephalography study, we tested a prediction derived from load theory of attention (e.g., Lavie, 1995) that segregation requires attention but can benefit from the automatic allocation of any “leftover” capacity under low load. Complex auditory scenes were modeled with stochastic figure-ground stimuli (Teki et al., 2013), which occasionally contained repeated frequency component “figures.” Naive human participants (both sexes) passively listened to these signals while performing a visual attention task of either low or high load. While clear figure-related neural responses were observed under conditions of low load, high visual load substantially reduced the neural response to the figure in auditory cortex (planum temporale, Heschl's gyrus). We conclude that fundamental figure-ground segregation in hearing is not automatic but draws on resources that are shared across vision and audition.

SIGNIFICANCE STATEMENT This work resolves a long-standing question of whether figure-ground segregation, a fundamental process of auditory scene analysis, requires attention or is underpinned by automatic, encapsulated computations. Task-irrelevant sounds were presented during performance of a visual search task. We revealed a clear magnetoencephalography neural signature of figure-ground segregation in conditions of low visual load, which was substantially reduced in conditions of high visual load. This demonstrates that, although attention does not need to be actively allocated to sound for auditory segregation to occur, segregation depends on shared computational resources across vision and hearing. The findings further highlight that visual load can impair the computational capacity of the auditory system, even when it does not simply dampen auditory responses as a whole.
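For readers unfamiliar with the paradigm, the following Python/NumPy sketch illustrates the general logic of a stochastic figure-ground (SFG) stimulus: a rapid sequence of random tone chords in which a small, fixed set of frequency components repeats across consecutive chords, so that a “figure” pops out of the random “ground.” This is a minimal illustration only; the sample rate, chord duration, frequency pool, figure coherence, figure duration, and the helper names chord and sfg_stimulus are assumptions made for the sketch, not the exact parameters used by Teki et al. (2013) or in the present study.

# Minimal sketch of a stochastic figure-ground (SFG) stimulus in the spirit of
# Teki et al. (2013). All numeric parameters below are illustrative assumptions.
import numpy as np

FS = 44100           # sample rate in Hz (assumed)
CHORD_DUR = 0.05     # 50 ms chords (assumed)
N_CHORDS = 40        # 2 s total stimulus (assumed)
FREQ_POOL = np.logspace(np.log10(180.0), np.log10(7000.0), 129)  # assumed log-spaced pool

def chord(freqs, dur=CHORD_DUR, fs=FS):
    """Sum of pure tones at the given frequencies with short raised-cosine ramps."""
    t = np.arange(int(dur * fs)) / fs
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    ramp = int(0.005 * fs)                      # 5 ms onset/offset ramp
    env = np.ones_like(t)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def sfg_stimulus(figure=True, coherence=4, fig_onset=20, fig_len=10,
                 n_background=10, rng=None):
    """Concatenate random chords; during the figure interval a fixed set of
    `coherence` frequencies repeats across chords, forming the figure."""
    if rng is None:
        rng = np.random.default_rng(0)
    fig_freqs = rng.choice(FREQ_POOL, size=coherence, replace=False)
    chords = []
    for i in range(N_CHORDS):
        freqs = list(rng.choice(FREQ_POOL, size=n_background, replace=False))
        if figure and fig_onset <= i < fig_onset + fig_len:
            freqs += list(fig_freqs)            # repeated components form the figure
        chords.append(chord(freqs))
    x = np.concatenate(chords)
    return x / np.max(np.abs(x))                # normalize to avoid clipping

Calling sfg_stimulus(figure=True) returns a unit-normalized waveform that can be written to a WAV file for inspection, while sfg_stimulus(figure=False) yields a matched ground-only control, mirroring the figure-present versus figure-absent contrast described above.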

Highlights

  • Figure-ground segregation, the process by which an auditory object is perceptually extracted from the aggregate sound input, underlies key aspects of listeners’ ability to make sense of the acoustic environment, including recognizing individual sounds

  • In the MEG studies, we would expect a much more substantial effect on awareness, in line with the strong effects demonstrated on the figure-related negativity (FRN) response

  • Gutschalk et al. (2008) showed that brain responses evoked by a tone stream (“target”) embedded within a tone cloud were substantially enhanced when listeners actively attended to the target relative to when attention was directed to an unrelated stimulus in the other ear or during passive listening

Introduction

Figure-ground segregation, the process by which an auditory object is perceptually extracted from the aggregate sound input, underlies key aspects of listeners’ ability to make sense of the acoustic environment, including recognizing individual sounds. Whether segregation depends on attention has been a long-standing question in hearing research (Shamma and Micheyl, 2010; Shamma et al., 2011; Snyder et al., 2012; Puvvada and Simon, 2017), but despite decades of debate, the answer has remained elusive.
