Abstract

Humans are unique in their ability to communicate using spoken language. However, it remains unclear how distinct acoustic features of speech sounds are represented in the auditory pathway over time. In this study, we applied a novel analysis technique to electroencephalography (EEG) signals recorded as subjects listened to continuous speech, and characterized the neural representation of acoustic features and the progression of responses over time. We averaged the time-aligned neural responses to phoneme instances to calculate a phoneme-related potential (PRP). We show that phonemes in continuous speech evoke multiple observable responses that are clearly separated in time. These recurrent responses have different scalp distributions and occur as early as 50 ms and as late as 400 ms after phoneme onset. We show that the responses explicitly represent the acoustic distinctions of phonemes, and that linguistic and non-linguistic information appear at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. This study provides evidence for a dynamic neural transformation of low-level speech features and forms an empirical framework to study the representational changes in learning, attention, and speech disorders.
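The core of the PRP computation described above is epoch averaging: extracting EEG segments time-aligned to each phoneme onset and averaging them. A minimal sketch of this idea is shown below; the function name, window lengths, and baseline-correction choice are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def phoneme_related_potential(eeg, onsets, fs, pre=0.1, post=0.5):
    """Average time-aligned EEG epochs around phoneme onsets (illustrative sketch).

    eeg    : (n_channels, n_samples) array of EEG signals
    onsets : iterable of phoneme onset times in seconds
    fs     : sampling rate in Hz
    pre, post : epoch window before/after onset, in seconds
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets:
        i = int(round(t * fs))
        # keep only epochs fully contained in the recording
        if i - n_pre >= 0 and i + n_post <= eeg.shape[1]:
            ep = eeg[:, i - n_pre:i + n_post]
            # baseline-correct each epoch using the pre-onset interval
            epochs.append(ep - ep[:, :n_pre].mean(axis=1, keepdims=True))
    # the PRP is the across-instance average, shape (n_channels, n_pre + n_post)
    return np.mean(epochs, axis=0)
```

In practice, epochs would be grouped by phoneme category before averaging, so each phoneme (or phonetic class) yields its own PRP whose timing and scalp distribution can then be compared.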
