Abstract
Speech production involves the synchronization of neural activity between the speech centers of the brain and the oral-motor system, allowing for the conversion of thoughts into meaningful sounds. This hierarchical mechanism is hindered by partial or complete paralysis of the articulators in patients suffering from locked-in syndrome. These patients are in dire need of effective brain-computer interfaces (BCIs), which can provide at least a basic level of communication assistance. In this study, we attempted to decode overt (loud) speech directly from the brain via non-invasive magnetoencephalography (MEG) signals to build the foundation for a faster, direct brain-to-text BCI. A shallow Artificial Neural Network (ANN) was trained with wavelet features of the MEG signals for this objective. Experimental results show that direct speech decoding from MEG signals is possible. Moreover, we found that jaw motion and MEG signals may carry complementary information for speech decoding.
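The abstract mentions wavelet features of the MEG signals as input to a shallow ANN. As a minimal, purely illustrative sketch (this is not the authors' pipeline; the wavelet family, decomposition depth, and feature summary here are all assumptions), one could compute a multi-level Haar wavelet decomposition of a single MEG channel and summarize each sub-band by its energy:

```python
# Illustrative sketch only: multi-level Haar wavelet decomposition of a
# 1-D signal, with each sub-band summarized by its energy. Real MEG
# pipelines typically use dedicated libraries (e.g. PyWavelets) and
# richer statistics; those specifics are not taken from the paper.
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; the input
    length is assumed to be even.
    """
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_features(signal, levels=2):
    """Decompose `levels` times; return per-sub-band energies.

    Yields one energy per detail band plus one for the final
    approximation band, a compact feature vector a shallow
    classifier could consume.
    """
    feats = []
    current = signal
    for _ in range(levels):
        current, detail = haar_dwt(current)
        feats.append(sum(d * d for d in detail))
    feats.append(sum(a * a for a in current))
    return feats

# Toy "MEG channel" of 8 samples (hypothetical data, not from the study)
features = wavelet_features([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0], levels=2)
```

Because the Haar transform is orthonormal, the sub-band energies sum to the energy of the original signal, which makes such features easy to sanity-check.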