Abstract

Recent studies have demonstrated that it is possible to decode and synthesize acoustic speech directly from intracranial measurements of brain activity. A current major challenge is to extend the efficacy of this decoding to imagined speech processes toward the development of a practical speech neuroprosthesis for the disabled. The present study used intracranial brain recordings from participants who performed a speaking task consisting of overt, mouthed, and imagined speech trials. Rather than directly comparing the performance of speech decoding models trained on the respective speaking modes, this study developed and trained models that use neural data to discriminate between pairs of speaking modes, in order to better elucidate the unique neural features underlying the discrepancy between overt and imagined decoding performance. The results further support the view that, while a common neural substrate exists across speaking modes, unique neural processes also differentiate them.
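To illustrate the general idea of pairwise speaking-mode discrimination, the sketch below trains and cross-validates a binary classifier for each pair of modes on per-trial neural feature vectors. The feature representation (one vector of electrode features per trial), the logistic-regression classifier, and the synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of pairwise speaking-mode discrimination.
# ASSUMPTIONS: per-trial neural feature vectors (e.g., band power per
# electrode) and mode labels; the classifier and data here are stand-ins.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: 60 trials per mode, 128 electrode features each.
modes = ["overt", "mouthed", "imagined"]
n_trials, n_features = 60, 128
X = {m: rng.normal(loc=i * 0.1, size=(n_trials, n_features))
     for i, m in enumerate(modes)}

# One binary classifier per pair of speaking modes; cross-validated
# accuracy above chance suggests neural features that differentiate modes.
for mode_a, mode_b in combinations(modes, 2):
    features = np.vstack([X[mode_a], X[mode_b]])
    labels = np.array([0] * n_trials + [1] * n_trials)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, features, labels, cv=5)
    print(f"{mode_a} vs {mode_b}: accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")
```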
