Abstract

Brain-computer interfaces that directly decode speech could restore communication to locked-in individuals. However, decoding speech from brain signals still faces many challenges. We investigated decoding of phonemes - the smallest separable units of speech - from ECoG signals recorded during word production. We expanded on previous efforts to identify specific phonemes by additionally classifying phonemes according to their position within the word. Using linear discriminant analysis, we evaluated how the context of a phoneme within a word affects classification results. The decoding accuracy of our linear classifier showed that the context of a phoneme can be determined from the cortical signal with accuracy significantly greater than chance. Further, we identified the spectrotemporal features that contributed most to successful decoding of phonemic classes. Finally, we discuss how these results can augment speech decoding for neural interfaces.
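As a rough illustration of the classification approach named above (a minimal sketch, not the authors' implementation): linear discriminant analysis fits one mean per class and a shared covariance over the feature vectors (here, hypothetical spectrotemporal features per phoneme instance), then assigns each trial to the class with the highest linear discriminant score. The function names and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def fit_lda(X, y):
    """Fit an LDA classifier: per-class means, priors, and a shared
    (pooled within-class) covariance. X: (n_trials, n_features)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    priors = np.array([np.mean(y == c) for c in classes])
    # Pooled within-class covariance, with a small ridge term for
    # numerical stability (relevant for high-dimensional ECoG features).
    Xc = np.vstack([X[y == c] - means[i] for i, c in enumerate(classes)])
    cov = Xc.T @ Xc / (len(X) - len(classes)) + 1e-6 * np.eye(X.shape[1])
    return classes, means, priors, np.linalg.inv(cov)

def predict_lda(model, X):
    """Assign each row of X to the class with the highest linear
    discriminant score: x' S^-1 m_k - 0.5 m_k' S^-1 m_k + log pi_k."""
    classes, means, priors, cov_inv = model
    scores = (X @ cov_inv @ means.T
              - 0.5 * np.sum((means @ cov_inv) * means, axis=1)
              + np.log(priors))
    return classes[np.argmax(scores, axis=1)]

# Synthetic demo: two hypothetical phoneme-context classes whose
# feature distributions differ only in mean, as LDA assumes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (120, 16)),
               rng.normal(1.5, 1.0, (120, 16))])
y = np.repeat([0, 1], 120)
model = fit_lda(X, y)
accuracy = np.mean(predict_lda(model, X) == y)
```

With well-separated synthetic classes the training accuracy is far above the 50% chance level; on real cortical signals, accuracy significantly above chance is the relevant benchmark.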
