Abstract

The neural representation of continuous speech in human auditory cortex was obtained noninvasively via magnetoencephalography. The neural response was recorded from human subjects listening to a spoken narrative, either in quiet or in the presence of interfering speech. The cortical neural response to clean speech is demonstrated to precisely track the slow temporal modulations (<10 Hz) of speech in a broad spectral region between 400 Hz and 2 kHz. The neural code is sufficiently faithful to decode acoustic features of speech. To examine the robustness of, and the role of attention in, this neural code, another spoken narrative was presented simultaneously, either to a different ear (dichotically) or to the same ear (diotically), and the subjects were instructed to focus on only one of the two speech signals. The cortical representation of the attended speech is found to be substantially stronger than that of the unattended speech. This attentional effect is significant even during the subjects’ first exposure to the spoken narratives. These results demonstrate that auditory cortex precisely represents the slow temporal modulations of speech and maintains separate neural representations for concurrent speech signals, which can be individually and strongly modulated by attention. [Work supported by the NIDCD Grant No. R01‐DC‐008342.]
