Abstract

Many researchers have attempted to decode spoken and imagined speech directly from brain signals, working toward a natural-speech BCI. This paper addresses feature extraction and decoding of auditory and articulatory features of the motor cortex using electrocorticography (ECoG). Consonants were selected as auditory representations, and both places of articulation and manners of articulation were selected as articulatory representations. The auditory and articulatory representations were decoded at different time lags relative to speech onset to find optimal temporal decoding parameters. In addition, this work examines the role of the temporal lobe during speech production directly from ECoG signals, analyzing the temporal propagation of its activity before and after speech onset with classification and statistical tests. A novel decoding model using temporal lobe activity was developed to predict a spectral representation of the speech envelope during speech production. Deep learning was used throughout the analysis. This new knowledge may be used to enhance existing speech-based BCI systems, offering a more natural communication modality, and the work contributes to speech neurophysiology by providing a better understanding of speech processing in the brain.
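The lag sweep described above can be illustrated with a minimal sketch: classify trials from a short feature window whose position is shifted relative to speech onset, and record accuracy per lag. Everything here is an assumption for illustration only, not the paper's method: the data are synthetic, the 16-channel layout, window sizes, and four "consonant classes" are invented, and a simple nearest-centroid classifier stands in for the deep-learning models the work actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-aligned ECoG features (hypothetical shapes):
# 120 trials x 16 channels x 200 samples, with speech onset at sample 100.
n_trials, n_channels = 120, 16
onset, win = 100, 20                          # window length in samples
labels = rng.integers(0, 4, size=n_trials)    # 4 invented consonant classes

X = rng.normal(size=(n_trials, n_channels, 200))
for i, y in enumerate(labels):
    # Inject class-specific activity just before onset so the sweep
    # has a peak to find (purely synthetic).
    X[i, y, onset - win:onset] += 2.0

def decode_at_lag(lag):
    """Nearest-centroid decoding from a window ending `lag` samples after onset."""
    start = onset + lag - win
    feats = X[:, :, start:start + win].mean(axis=2)   # mean activity per channel
    train = np.arange(n_trials) % 2 == 0              # even trials train, odd test
    cents = np.stack(
        [feats[train & (labels == c)].mean(axis=0) for c in range(4)])
    dists = ((feats[~train, None, :] - cents[None]) ** 2).sum(axis=2)
    pred = np.argmin(dists, axis=1)
    return (pred == labels[~train]).mean()

# Sweep lags around speech onset; accuracy peaks where class information lives.
accs = {lag: decode_at_lag(lag) for lag in (-50, -25, 0, 25, 50)}
best_lag = max(accs, key=accs.get)
```

In this toy setup, accuracy is near chance at lags whose window misses the injected pre-onset activity and peaks at the lag whose window covers it, which is the kind of temporal profile an optimal-lag analysis looks for.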
