Abstract

Speech production involves the synchronization of neural activity between the speech centers of the brain and the oral-motor system, allowing thoughts to be converted into meaningful sounds. This hierarchical mechanism is hindered by partial or complete paralysis of the articulators in patients with locked-in syndrome. These patients are in dire need of effective brain-computer interfaces (BCIs) that can provide at least some level of communication assistance. In this study, we decoded overt (loud) speech directly from the brain via non-invasive magnetoencephalography (MEG) signals to build the foundation for a faster, direct brain-to-text BCI. A shallow Artificial Neural Network (ANN) was trained on wavelet features of the MEG signals for this objective. Experimental results show that direct speech decoding from MEG signals is possible. Moreover, we found that jaw motion and MEG signals may carry complementary information for speech decoding.
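As a rough illustration of the pipeline the abstract describes, the sketch below extracts discrete-wavelet features from multi-channel MEG epochs and trains a shallow neural network classifier. It is a minimal sketch under stated assumptions, not the paper's implementation: the `db4` wavelet, decomposition level, RMS sub-band summary, network size, use of scikit-learn's `MLPClassifier`, and the synthetic data are all illustrative choices; the study's actual features, labels, and architecture may differ.

```python
# Minimal sketch: wavelet features of MEG epochs -> shallow ANN.
# All specifics (wavelet family, level, network size, synthetic data)
# are assumptions for illustration, not the paper's exact method.
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def wavelet_features(epoch, wavelet="db4", level=4):
    """Per-channel DWT; summarize each sub-band by its RMS energy."""
    feats = []
    for channel in epoch:  # epoch shape: (n_channels, n_samples)
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        feats.extend(np.sqrt(np.mean(c ** 2)) for c in coeffs)
    return np.asarray(feats)

# Synthetic stand-in data: 200 epochs, 64 MEG channels, 1 s at 250 Hz,
# and 5 speech classes (e.g., five spoken phrases).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 64, 250))
labels = rng.integers(0, 5, size=200)

X = np.stack([wavelet_features(e) for e in epochs])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

# "Shallow ANN": a single small hidden layer.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

On real data, the random arrays would be replaced by epoched MEG recordings aligned to speech onsets, and accuracy above chance on held-out epochs would indicate decodable speech information, as the abstract reports.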
