Abstract

It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians’ higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues, at the onset of the first formant and at the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.

Highlights

  • Musical training is associated with better duration discrimination[14], enhanced auditory attention[13], facilitated pitch processing[15,16], better detection of tones masked by noise[13] and greater perceptual acuity of rapid spectro-temporal changes[17]

  • Auditory Classification Images (ACIs) are a behavioral counterpart of spectro-temporal receptive fields (STRFs), a widely used model to capture the relationship between the acoustic characteristics of a stimulus and the firing of a specific auditory neuron[45,46]

  • Nineteen professional musicians and 19 normal-hearing participants with no musical practice were asked to discriminate the final syllable of 4 non-word stimuli

Introduction

Musical expertise has been linked to a range of basic auditory benefits: duration discrimination[14], enhanced auditory attention[13], facilitated pitch processing[15,16], better detection of tones masked by noise[13] and greater perceptual acuity of rapid spectro-temporal changes[17]. This behavioral evidence for the benefit of musical experience on basic auditory skills has been supplemented with a series of electrophysiological studies. The experimental paradigm consists of introducing random fluctuations into the stimulus and measuring the influence of these fluctuations on the participant’s behavior. This approach reveals how noise masking different “parts” of the sound biases the listener toward a specific response. The most recent developments in the field involve penalized GLMs (also called generalized additive models)[39,42,43]. This last technique offers sufficient statistical power to translate Classification Images back to the auditory domain. In addition to the expected acoustic cues present at the F2 and F3 onsets, which have been previously determined through other methods, the authors were able to reveal a neglected source of information about the identity of the stimulus: the F1 onset.
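The reverse-correlation logic described above can be sketched in a few lines. The toy simulation below models a listener whose binary responses depend on the noise energy falling in one time-frequency region, then recovers that region with the classic difference-of-averages classification-image estimator. All names, dimensions, and the cue location are illustrative assumptions; the penalized GLMs cited in the text are a regularized refinement of this simple estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reverse-correlation experiment. Each trial adds a random noise
# field (n_time x n_freq spectro-temporal bins) to the stimulus; the
# simulated listener weights one time-frequency region (the "cue")
# when choosing between the two responses.
n_trials, n_time, n_freq = 5000, 8, 8
noise = rng.normal(size=(n_trials, n_time, n_freq))

template = np.zeros((n_time, n_freq))
template[2:4, 3:5] = 1.0  # hypothetical internal cue region

# Logistic decision rule: noise energy on the cue biases toward "B".
drive = (noise * template).sum(axis=(1, 2))
responses = rng.random(n_trials) < 1.0 / (1.0 + np.exp(-drive))

# Classic classification-image estimate: mean noise field preceding
# "B" responses minus mean noise field preceding "A" responses.
# (Penalized-GLM approaches refine this estimator with smoothness
# priors, which matters when trials are scarce.)
aci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# The largest absolute weight should fall inside the cue region.
peak = tuple(int(i) for i in np.unravel_index(np.abs(aci).argmax(), aci.shape))
print(peak)
```

With 5000 simulated trials the difference-of-averages map is dominated by the cue bins, so the peak lands inside the region defined by `template`; real experiments have far fewer trials, which is precisely why the penalized estimators mentioned above are needed.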
