Understanding the brain's processing mechanisms, from the perception of speech sounds to high-level semantic processing, is vital for effective human–robot communication. In this study, 128-channel electroencephalography (EEG) signals were recorded while subjects listened to real words and pseudowords in Mandarin. Using an EEG source reconstruction method and a sliding-window Granger causality analysis, we analyzed the dynamic brain connectivity patterns. Results showed that the bilateral temporal cortex (lTC and rTC), the bilateral motor cortex (lMC and rMC), the frontal cortex (FC), and the occipital cortex (OC) were recruited in this process, with more complex patterns in the real-word condition than in the pseudoword condition. The spatial pattern is largely consistent with previous functional MRI studies on the comprehension of spoken Chinese. For the real-word condition, speech perception and processing involved distinct connection patterns in the initial phoneme perception and processing phase, the phonological processing and lexical selection phase, and the semantic integration phase. Specifically, compared with pseudowords, a hub region in the FC and unique patterns of lMC → rMC and lTC → FC connectivity were found during the processing of real words after 180 ms, while a distributed network of temporal, motor, and frontal brain areas was involved after 300 ms. This may be related to semantic processing and integration. The involvement of both bottom-up input and top-down modulation in real-word processing may support the previously proposed TRACE model. In sum, the findings of this study suggest that representations of speech involve dynamic interactions among distributed brain regions that communicate through time-specific functional networks.
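The sliding-window Granger causality analysis mentioned above can be illustrated with a minimal sketch: a pairwise Granger test is computed repeatedly over short, overlapping windows of two source-level time courses, yielding a time-resolved connectivity estimate. The sketch below uses statsmodels; the sampling rate, window length, step size, and VAR model order are illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch of a sliding-window Granger causality analysis between two
# source-level EEG time courses (e.g., a hypothetical lTC -> FC pair).
# Parameters below are illustrative, not those of the original study.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def sliding_window_gc(source, target, fs=500, win_ms=100, step_ms=20, order=5):
    """Return window-onset times (s) and Granger F statistics for source -> target."""
    win = int(win_ms * fs / 1000)
    step = int(step_ms * fs / 1000)
    onsets, f_stats = [], []
    for start in range(0, len(target) - win + 1, step):
        seg = np.column_stack([target[start:start + win],   # column 1: caused series
                               source[start:start + win]])  # column 2: causing series
        res = grangercausalitytests(seg, maxlag=order, verbose=False)
        f_stats.append(res[order][0]['ssr_ftest'][0])        # F statistic at the chosen lag
        onsets.append(start / fs)
    return np.array(onsets), np.array(f_stats)

# Example with simulated data: "target" lags behind "source" by a few samples,
# so the source -> target F statistics should be elevated across windows.
rng = np.random.default_rng(0)
source = rng.standard_normal(1000)
target = np.roll(source, 3) + 0.5 * rng.standard_normal(1000)
t, f = sliding_window_gc(source, target)
print(f"{len(t)} windows, peak F = {f.max():.2f}")
```

In practice this computation would be repeated for every ordered pair of reconstructed source regions and every trial, with statistical thresholding across subjects, before directed connections such as lMC → rMC are reported.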