Abstract

Background: Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor µ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials).

Methods: Sixteen participants (15 female, 1 male) were asked to passively listen to or actively identify speech stimuli and tone-sweeps in a two-alternative forced-choice discrimination task while the electroencephalogram (EEG) was recorded from 32 channels. Stimuli were presented at signal-to-noise ratios (SNRs) at which discrimination accuracy was high (i.e., 80–100%) and at low SNRs that reduced discrimination performance to chance. EEG data were decomposed using independent component analysis (ICA) and clustered across participants using principal component methods in EEGLAB.

Results: ICA revealed left and right sensorimotor µ components in 14/16 and 13/16 participants, respectively, identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of left and right lateralized µ component clusters revealed significant (pFDR < .05) suppression in the traditional beta frequency range (13–30 Hz) prior to, during, and following syllable discrimination trials. No significant differences from baseline were found for passive tasks. Tone conditions produced right µ beta suppression following stimulus onset only. For the left µ, significant differences in the magnitude of beta suppression were found for correct speech discrimination trials relative to chance trials following stimulus offset.

Conclusions: Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units that are then synthesized with incoming sensory cues during active, as opposed to passive, processing. Future directions and possible translational value for clinical populations in which sensorimotor integration may play a functional role are discussed.
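The Methods above describe a pipeline of ICA decomposition, selection of sensorimotor µ components, and FDR-corrected time-frequency analysis of beta-band power. As a minimal sketch only, the Python code below illustrates how an analogous pipeline could be assembled in MNE-Python; the study itself used EEGLAB, and the file name, event codes, and all parameter values here are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch (assumptions, not the authors' code): an analogous pipeline in MNE-Python.
    import numpy as np
    from scipy.stats import ttest_1samp

    import mne
    from mne.preprocessing import ICA
    from mne.stats import fdr_correction
    from mne.time_frequency import tfr_morlet

    # Load and band-pass filter a continuous 32-channel recording (hypothetical file name).
    raw = mne.io.read_raw_fif("sub-01_discrimination_raw.fif", preload=True)
    raw.filter(l_freq=1.0, h_freq=40.0)

    # Decompose the EEG into independent components (cf. ICA in EEGLAB).
    # Sensorimotor mu components would then be chosen by scalp topography,
    # spectral peaks, and localization to the pre-/postcentral gyri.
    ica = ICA(n_components=30, method="infomax", random_state=0)
    ica.fit(raw)
    mu_candidates = ica.get_sources(raw)  # component time courses for visual/spectral inspection

    # Epoch around stimulus onset with a pre-stimulus baseline (hypothetical event code).
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, event_id={"syllable": 1},
                        tmin=-1.0, tmax=2.0, baseline=(-1.0, -0.5), preload=True)

    # Single-trial time-frequency power over the beta band (13-30 Hz),
    # expressed as a log-ratio relative to the pre-stimulus baseline.
    freqs = np.arange(13.0, 31.0, 1.0)
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                       return_itc=False, average=False)
    power.apply_baseline(baseline=(-1.0, -0.5), mode="logratio")

    # For brevity, average over channels and frequencies; the study analyzed
    # left/right mu-component clusters rather than channel averages.
    beta = power.data.mean(axis=(1, 2))  # shape: (n_trials, n_times)

    # Test beta power against baseline (zero log-ratio) at each time point and
    # control the false discovery rate at .05 (cf. pFDR < .05 in the Results).
    t_vals, p_vals = ttest_1samp(beta, popmean=0.0, axis=0)
    reject, p_fdr = fdr_correction(p_vals, alpha=0.05)

In this sketch, single-trial beta power relative to the pre-stimulus baseline is tested against zero at each time point, which mirrors the logic, though not necessarily the exact statistics, of comparing µ-component suppression to baseline at pFDR < .05.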

Highlights

  • It is well known that the acoustic speech signal does not directly map onto perceived speech-sound categories

  • The study provides evidence supporting claims that these internal models operate as a kind of phonological or articulatory selective attention [10,11,7]. The finding that both correct and chance syllable discrimination trials were preceded by early µ suppression is what would be expected if forward articulatory models function to support selective attention

  • The study provides further evidence that early forward models are related to perceptual performance at the point in time when acoustic features are sufficient for comparison with initial hypotheses in a manner similar to ‘analysis-by-synthesis.’


Introduction

It is well known that the acoustic speech signal does not directly map onto perceived speech-sound categories. This phenomenon is known as a ‘many-to-many’ mapping between acoustic correlates and phonemic units. Despite the complex relationship between acoustic features and perception, humans successfully process speech even when acoustic cues are mixed with background noise. The process by which categorical percepts are recovered from variable acoustic cues has long been a matter of debate and is known as the ‘lack of invariance problem’ [3,4,5,6,7,8,9,10,11]. To provide evidence for the constructivist proposal that articulatory hypotheses enhance perception, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor µ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials).

