A brain-computer interface (BCI) based on speech imagery can help people with motor disorders communicate their thoughts to the outside world in a natural way. Because it is portable, non-invasive, and safe, functional near-infrared spectroscopy (fNIRS) is a preferred modality for developing BCIs. Previous fNIRS-based BCIs relied mainly on activation information and ignored the functional connectivity between neural areas. In this study, a 4-class fNIRS-based speech imagery BCI is presented to decode simplified articulation motor imagery (only the movements of the jaw and lips were retained) of different vowels. Synchronization information in the motor cortex was extracted as features. In the multiclass (four-class) setting, the mean subject-dependent classification accuracies approached or exceeded 40% in the 0-2.5 s and 0-10 s time windows, respectively. In the binary setting (classification accuracies averaged over all pairwise comparisons between two vowels), the mean subject-dependent accuracies exceeded 70% in both the 0-2.5 s and 0-10 s time windows. These results demonstrate that connectivity features can effectively differentiate vowels even when the time window is reduced from 10 s to 2.5 s, with nearly identical decoding performance in the two windows. This finding suggests that fNIRS-based speech imagery BCIs can be further optimized in terms of feature extraction and reduced command-generation time. In addition, since simplified articulation motor imagery of vowels can be distinguished, articulation motor imagery information extracted from the motor cortex should be emphasized in fNIRS-based speech imagery BCIs to improve decoding performance.
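The abstract does not name the exact synchronization measure used as the connectivity feature. As a minimal sketch, assuming Pearson correlation between pairs of fNIRS channel signals (e.g., HbO time series over the motor cortex) within a trial window, the feature extraction step might look like the following; the function name, window bounds, and sampling rate are illustrative and not taken from the paper.

```python
import numpy as np

def connectivity_features(signals, fs, t_start=0.0, t_end=2.5):
    """Pairwise synchronization features from fNIRS channel signals.

    signals : (n_channels, n_samples) array, e.g., HbO concentration changes.
    fs      : sampling rate in Hz.
    Returns the upper-triangular Pearson correlations between channels
    within the [t_start, t_end] window, flattened into a feature vector.
    """
    a, b = int(t_start * fs), int(t_end * fs)
    window = signals[:, a:b]                   # restrict to the time window
    corr = np.corrcoef(window)                 # (n_channels, n_channels) matrix
    iu = np.triu_indices(corr.shape[0], k=1)   # drop diagonal and duplicates
    return corr[iu]                            # one value per channel pair
```

Per trial, the resulting vector (one value per channel pair) would then be fed to an off-the-shelf classifier; other synchronization measures, such as coherence or phase-locking value, could be swapped in at the `np.corrcoef` step.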