Abstract

Figure-ground segregation, the brain’s ability to group related features into stable perceptual entities, is crucial for auditory perception in noisy environments. The neuronal mechanisms for this process are poorly understood in the auditory system. Here, we report figure-ground modulation of multi-unit activity (MUA) in the primary and non-primary auditory cortex of rhesus macaques. Across both regions, MUA increases upon presentation of auditory figures, which consist of coherent chord sequences. We show increased activity even in the absence of any perceptual decision, suggesting that neural mechanisms for perceptual grouping are, to some extent, independent of behavioral demands. Furthermore, we demonstrate differences in figure encoding between more anterior and more posterior regions; perceptual saliency is represented in anterior cortical fields only. Our results suggest an encoding of auditory figures from the earliest cortical stages by a rate code.

Highlights

  • Figure-ground segregation of natural scenes is essential for directing behavior, independent of the sensory modality

  • The complexity of natural acoustic scenes can be modeled with stochastic figure-ground (SFG) stimuli, in which temporally coherent figure elements are segregated from random masker elements that overlap in frequency-time space (see the sketch after this list)

  • Figure-ground segregation seems to be susceptible to cognitive load across modalities; high visual load reduces auditory cortical activity to auditory figures (Molloy et al., 2018). These findings suggest that the grouping of figure elements is possible without attention being directed to the sound but that the perception of the auditory object is facilitated by attentional modulation of brain responses
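
To make the SFG construction described in the second highlight concrete, the following Python sketch generates one such stimulus: every chord contains random masker tones drawn from a fixed frequency pool, and during the figure interval a small set of frequencies repeats across consecutive chords, forming the temporally coherent figure. All parameter values (chord duration, frequency range, coherence level, figure timing) are illustrative assumptions, not the values used in the study.

    # Minimal SFG stimulus sketch; parameters are assumed for illustration only.
    import numpy as np

    def sfg_stimulus(n_chords=40, chord_dur=0.05, fs=44100,
                     n_masker=10, coherence=4,
                     figure_onset=20, figure_len=10, seed=0):
        """Return an SFG waveform: random masker chords plus, during the
        figure interval, a fixed set of `coherence` frequencies repeated
        in every chord (the temporally coherent figure)."""
        rng = np.random.default_rng(seed)
        # Log-spaced frequency pool (assumed range)
        freq_pool = np.logspace(np.log10(180), np.log10(7000), 120)
        figure_freqs = rng.choice(freq_pool, size=coherence, replace=False)
        n_samp = int(chord_dur * fs)
        t = np.arange(n_samp) / fs
        chords = []
        for i in range(n_chords):
            # Random masker components, redrawn for every chord
            freqs = list(rng.choice(freq_pool, size=n_masker, replace=False))
            if figure_onset <= i < figure_onset + figure_len:
                # Coherent figure components, identical across chords
                freqs += list(figure_freqs)
            chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
            chord *= np.hanning(n_samp)   # taper to avoid onset/offset clicks
            chords.append(chord / len(freqs))
        return np.concatenate(chords), figure_freqs

    wave, fig_freqs = sfg_stimulus()

Because the figure components overlap the masker in frequency-time space, the figure cannot be detected from any single chord; it only emerges by grouping the repeated components across time.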



Introduction

Figure-ground segregation of natural scenes is essential for directing behavior, independent of the sensory modality. The perception of separate auditory objects in noisy scenes requires the brain to detect, segregate, and group sound elements that belong to the same figure or object (Bizley and Cohen, 2013; Griffiths and Warren, 2004). This process is related to stream segregation, in which cognitive processes organize incoming sound into distinct perceptual streams. Figure detection correlates with speech-in-noise detection irrespective of hearing thresholds for pure tones (Holmes and Griffiths, 2019). Both stochastic figure-ground (SFG) detection and speech-in-noise detection require cortical brain mechanisms (Holmes et al., 2019), highlighting the importance of central grouping mechanisms in normal hearing.

