Abstract

Brain–computer interface (BCI)-based multilingual silent-reading electroencephalography (EEG) decoding technology provides a convenient and fast communication method for multilingual patients with language disorders. To develop a silent-reading EEG decoding method that can be applied to multiple languages, it is necessary not only to select the appropriate feature channels for each language but also to distinguish the spatial and temporal relationships among the channels. Existing graph convolutional networks, which learn the features and connections of each node on a fixed graph, are inadequate for capturing both the temporal and spatial relationships between channels. Therefore, we represent each EEG signal by a feature matrix and its adaptive graph and propose a Cross-fusion Adaptive Graph Convolution Network (CFA-GCN) for decoding. We collect a novel EEG dataset from a subject who silently read 7 Chinese and 9 English words over 26 days, which can be used for further research and development of BCI systems. The proposed method achieves state-of-the-art performance, with 45.30% and 47.62% decoding accuracy for silent-reading EEG signals of English and Chinese words, respectively. This work demonstrates the feasibility of decoding silent-reading EEG signals of bilingual words, and the CFA-GCN shows the potential of BCI systems to facilitate effective communication for patients with multilingual language impairments.
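
The abstract does not specify the network architecture, so the following is only a minimal sketch of the adaptive-graph idea it describes: the adjacency over EEG channels is learned jointly with the per-channel feature matrix rather than fixed in advance. The class name, tensor shapes, and dimensions below are illustrative assumptions, not the authors' CFA-GCN implementation.

```python
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    """Minimal adaptive graph convolution: the channel adjacency is a
    learnable parameter optimized together with the feature projection."""

    def __init__(self, num_channels: int, in_dim: int, out_dim: int):
        super().__init__()
        # Learnable adjacency over EEG channels (the "adaptive graph");
        # initialized to the identity, i.e. no cross-channel mixing at start.
        self.adj = nn.Parameter(torch.eye(num_channels))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_channels, in_dim) -- per-channel feature matrix.
        a = torch.softmax(self.adj, dim=-1)        # row-normalized adjacency
        x = torch.einsum("ij,bjf->bif", a, x)      # propagate across channels
        return torch.relu(self.proj(x))            # per-channel projection


# Toy usage with assumed sizes: 32 EEG channels, 128 features per channel.
layer = AdaptiveGraphConv(num_channels=32, in_dim=128, out_dim=64)
out = layer(torch.randn(8, 32, 128))               # -> (8, 32, 64)
```

A full decoder in the spirit of the abstract would stack such layers (possibly one branch per language or per temporal/spatial view, fused before a classifier), but that design choice is not stated in the abstract.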
