Abstract

Imagined Speech (IS) is the mental imagination of speaking without any articulatory movement of the tongue or other muscles. Recent studies have increasingly investigated IS tasks for Brain-Computer Interface (BCI) applications. Electroencephalography (EEG) signals, which record brain activity, can be analyzed for BCI tasks using Machine Learning (ML) methods. This paper addresses decoding IS brain waves through a fusion of classical signal processing, Graph Signal Processing (GSP), and Graph Learning (GL) based features. The proposed fusion method, named GraphIS (short for Graph-based Imagined Speech BCI decoder), is applied to four-class classification (three imagined words plus the rest state) on EEG recordings of fifteen subjects. Results show that GSP- and GL-based features substantially improve classification performance over classical signal processing features alone and over the state-of-the-art Common Spatial Pattern (CSP) feature extractor, by exploiting the spatial information of the signals and the interactions between channels in regions of interest. GraphIS achieves a mean accuracy of 50.10% on the four-class IS task, compared to 47.86% and 46.10% for the individual feature sets and 47.10% for CSP. Additionally, using an EEG connectivity map of the electrode signals obtained with GL methods, we found strong connections in the right frontal region as well as the left frontal region during IS, which previous IS studies had not highlighted.
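To make the GSP ingredient of the fusion concrete, the sketch below illustrates how features can be computed on a signal living on an electrode graph: the graph Fourier transform (projection onto the Laplacian eigenbasis) and the graph smoothness (total variation). This is a minimal illustration under assumed inputs, not the paper's pipeline; the adjacency matrix `W`, channel count, and signal `x` are synthetic placeholders standing in for a learned connectivity map and real EEG data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 8                      # stand-in for a small electrode montage

# Symmetric, non-negative adjacency matrix standing in for a learned
# connectivity map between EEG channels (synthetic, for illustration only)
W = rng.random((n_channels, n_channels))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

# Combinatorial graph Laplacian: L = D - W
L = np.diag(W.sum(axis=1)) - W

# Eigendecomposition of L yields the graph Fourier basis U
# (eigenvalues play the role of graph frequencies)
eigvals, U = np.linalg.eigh(L)

# One EEG sample across channels (synthetic signal on the graph)
x = rng.standard_normal(n_channels)

# Graph Fourier transform: spectral coefficients usable as GSP features
x_hat = U.T @ x

# Graph smoothness (total variation x^T L x): low values mean the signal
# varies little across strongly connected channels
smoothness = float(x @ L @ x)
print(x_hat.shape, smoothness)
```

In a real decoder, such spectral coefficients and smoothness values would be computed per trial and concatenated with classical signal processing features before classification.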
