Abstract
There is increasing interest in developing intuitive brain-computer interfaces (BCIs) to differentiate between mental tasks such as imagined speech. Both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have been used for this purpose. However, the classification accuracy and the number of commands in such BCIs have been limited. Multi-modal BCIs have been proposed to address these issues for some common BCI tasks, but not for imagined speech. Here, we propose a multi-class hybrid fNIRS-EEG BCI based on imagined speech. Eleven participants performed multiple iterations of three tasks: mentally repeating ‘yes’ or ‘no’ for 15 s, or an equivalent duration of unconstrained rest. We achieved an average ternary classification accuracy of 70.45 ± 19.19%, which is significantly better than that attained with either modality alone (p < 0.05). Our findings suggest that concurrent measurement of EEG and fNIRS can improve the classification accuracy of BCIs based on imagined speech.
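The abstract does not specify how the two modalities were combined or which classifier was used; the sketch below illustrates one common hybrid approach, feature-level fusion, in which per-trial EEG and fNIRS feature vectors are concatenated before a single classifier. Everything here is an assumption for illustration: the feature dimensions, the LDA classifier, and the data (synthetic noise, so reported accuracies will sit near the ternary chance level of ~33%). It is not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-trial feature vectors (in practice these would
# be, e.g., EEG band powers and fNIRS hemodynamic statistics per channel).
n_trials = 90                                   # 30 trials per class
X_eeg = rng.standard_normal((n_trials, 40))     # hypothetical EEG features
X_fnirs = rng.standard_normal((n_trials, 20))   # hypothetical fNIRS features
y = np.repeat([0, 1, 2], n_trials // 3)         # 'yes', 'no', rest labels

# Feature-level fusion: concatenate the two modalities for each trial.
X_hybrid = np.hstack([X_eeg, X_fnirs])

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

# Compare each modality alone against the fused feature set; with random
# data all three hover near chance, but the pattern mirrors the comparison
# reported in the abstract.
for name, X in [("EEG", X_eeg), ("fNIRS", X_fnirs), ("hybrid", X_hybrid)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:6s} ternary accuracy: {acc:.2%}")
```

With real, class-dependent features, the fused vector gives the classifier access to complementary electrical and hemodynamic information, which is the mechanism by which a hybrid BCI can outperform either modality alone.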