Abstract

Effective classification of imagined speech and intended speech would greatly aid the development of speech-based brain-computer interfaces (BCIs). This work distinguished imagined speech from intended speech using cortical EEG signals recorded from the scalp. EEG was recorded from eleven subjects as they produced Mandarin-Chinese monosyllables in both imagined and intended speech, and the EEG features were classified with common spatial pattern (CSP), time-domain, frequency-domain, and Riemannian-manifold-based methods. Among the four methods, the Riemannian-manifold-based method yielded the highest classification accuracy, 85.9%. Moreover, the classification accuracy obtained with a left-hemisphere-only electrode configuration was close to that obtained with the whole-brain electrode configuration. These findings have the potential to extend the output commands of silent speech interfaces.
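The abstract does not describe the Riemannian pipeline in detail, but a common form of Riemannian-manifold EEG classification is minimum distance to mean (MDM) over trial covariance matrices. The sketch below is a minimal, self-contained illustration of that general technique, not the authors' implementation: all function names and the synthetic data are assumptions, and it uses the log-Euclidean metric for simplicity rather than the affine-invariant metric often used in BCI work.

```python
import numpy as np

def logm_spd(C):
    # Matrix logarithm of a symmetric positive-definite matrix
    # via eigendecomposition (valid because C is SPD).
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def trial_covariance(X):
    # Spatial covariance of one EEG trial, X: (channels, samples).
    X = X - X.mean(axis=1, keepdims=True)
    return X @ X.T / (X.shape[1] - 1)

def mdm_fit(trials, labels):
    # Map each trial covariance into the log-Euclidean tangent space
    # and average per class to obtain class "mean" matrices.
    logs = np.array([logm_spd(trial_covariance(t)) for t in trials])
    return {c: logs[labels == c].mean(axis=0) for c in np.unique(labels)}

def mdm_predict(trials, class_means):
    # Assign each trial to the class whose mean is nearest
    # (Frobenius norm in the log-Euclidean space).
    preds = []
    for t in trials:
        L = logm_spd(trial_covariance(t))
        preds.append(min(class_means, key=lambda c: np.linalg.norm(L - class_means[c])))
    return np.array(preds)
```

In practice, libraries such as pyriemann provide tuned versions of this pipeline; the point of the sketch is only that classification happens on the manifold of covariance matrices rather than on raw time- or frequency-domain features.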

Full Text
Paper version not known
