Abstract

A speech imagery-based brain-computer interface (BCI) provides an alternative way for people to interact with the outside world intuitively. Most speech imagery BCIs based on functional near-infrared spectroscopy (fNIRS) are disadvantageous in applications outside the laboratory because they cannot detect asynchronous (self-paced) actions. This work aimed to develop a two-class asynchronous BCI that detects the idle state in the context of motor imagery of articulation (MIA). In this study, 19 healthy subjects were asked to rehearse the Chinese vowels /a/ and /u/ covertly and to maintain a rest state. A feature selection strategy was designed to combine time-domain, spatial-domain, and functional connectivity features. All discriminative information was extracted from fNIRS signals in a 0–2.5 s time window. Among single-modality features, the centrality features of brain networks performed better than any other feature type and yielded a subject-dependent classification accuracy of 75.1%. The combined features yielded a subject-dependent classification accuracy of 78.9%, significantly better than that of any single feature type. These results demonstrate that it is feasible to distinguish the idle state from active MIA states within a reduced time window. The proposed combination of information propagation, spatial, and activation patterns was effective for extracting discriminative information, thereby improving classification accuracy. The high classification accuracy and fast information transfer rate of the presented BCI show promising practicality for real-world applications.
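The abstract describes combining time-domain, spatial-domain, and functional connectivity (network centrality) features from a 0–2.5 s fNIRS window for two-class classification. The sketch below is only an illustration of that general idea, not the authors' implementation: the channel count, sampling rate, correlation threshold, choice of degree centrality, and the LDA classifier are all assumptions, and the spatial-pattern features used in the paper are omitted for brevity.

```python
# Hypothetical sketch: time-domain + network-centrality features from fNIRS
# trials, fed to a simple two-class classifier (MIA vs. idle state).
# All dimensions and parameters below are illustrative assumptions.
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def time_domain_features(trial):
    """Mean and linear slope of each channel's time course in the window."""
    t = np.arange(trial.shape[1])
    means = trial.mean(axis=1)
    slopes = np.polyfit(t, trial.T, 1)[0]   # linear trend per channel
    return np.concatenate([means, slopes])

def centrality_features(trial, threshold=0.5):
    """Degree centrality of a correlation-based functional connectivity graph."""
    corr = np.corrcoef(trial)                       # channels x channels
    adjacency = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    graph = nx.from_numpy_array(adjacency)
    return np.array(list(nx.degree_centrality(graph).values()))

def extract_features(trials):
    """Concatenate time-domain and centrality features for every trial."""
    return np.array([np.concatenate([time_domain_features(x),
                                     centrality_features(x)]) for x in trials])

# Synthetic example: 40 trials, 20 channels, 25 samples (0-2.5 s at 10 Hz)
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 20, 25))
labels = rng.integers(0, 2, size=40)                # 0 = idle, 1 = MIA

scores = cross_val_score(LinearDiscriminantAnalysis(),
                         extract_features(trials), labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

With real data, the random trials would be replaced by preprocessed HbO/HbR signals, and the subject-dependent accuracies reported in the abstract would come from per-subject cross-validation of this kind.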
