Abstract

Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. These differential findings for modulation rate and direction are discussed with respect to their relevance to phonetic discrimination.

Highlights

  • During verbal communication, our auditory system is charged with the task of sorting through a complex acoustic stream in order to identify relevant stimulus features and to integrate this information into a unified phonetic percept that allows us to perceive the incoming message

  • A significant cluster was observed in the right supramarginal gyrus (SMG)

  • We used Functional Magnetic Resonance Imaging (fMRI) to examine the organization of human auditory cortex for processing frequency modulated sounds

Introduction

Our auditory system is charged with the task of sorting through a complex acoustic stream in order to identify relevant stimulus features and to integrate this information into a unified phonetic percept that allows us to perceive the incoming message. This process occurs amidst competing sources of information and significant variability in how a given speech sound is produced, both within and between speakers. The signal is amplitude modulated, such that its intensity is rapidly changing and fast fading, and it is frequency modulated, so that spectral information changes at a rapid rate. This multicomponent nature of the acoustic speech signal makes it unique in the domain of auditory processing.
