Abstract

Speech is a complex sound sequence with rich acoustic and linguistic structure. Recent studies have suggested that low-frequency cortical activity can track linguistic units in speech, such as words and phrases, on top of low-level acoustic features. Here, using an artificial word learning paradigm, we investigate how different aspects of linguistic information, e.g., phonological, semantic, and orthographic information, modulate cortical tracking of words. Participants are randomly assigned to an experimental group or a control group. Both groups listen to speech streams composed of trisyllabic artificial words or trisyllabic real words. Participants in the experimental group explicitly learn different types of linguistic information about the artificial words (phonological, phonological + semantic, or phonological + orthographic information), while participants in the control group do not explicitly learn the words. Electroencephalographic (EEG) data from the control group reveal weaker cortical tracking of artificial words than of real words. However, when comparing the experimental and control groups, we find that explicit learning significantly improves neural tracking of artificial words. After explicit learning, cortical tracking of artificial words is comparable to that of real words, regardless of the training condition. These results suggest that training facilitates neural tracking of words and emphasize the fundamental role phonological information plays in sequential grouping.
