Abstract

Communication is essential to human behavior. Individuals with amyotrophic lateral sclerosis or severe traumatic brain injury face enormous challenges in communicating with others via speech, writing, or typing. This loss of communication severely limits their independence and quality of life. Recent research employing invasive and expensive surgically implanted neuroprostheses has shown some success in reconstructing language or images. However, these procedures are not user-friendly and are not available to all affected individuals. To address these limitations, the researcher created a convolutional neural network model to classify individually thought (subvocalized) letters. Data were collected from a study participant using a commercially available consumer electroencephalography (EEG) headset. The collected data were formatted for computer interpretation and uploaded to a Python notebook, then augmented and preprocessed to make them easier for the neural network model to classify. The best-performing convolutional neural network model achieved 63.33% accuracy, classifying 38 of 60 samples correctly, a statistically significant result (p < 0.00001). Limitations include the reliability of the EEG headset used and time restrictions on data collection. Future research will investigate better hyperparameters using additional data. Finally, the researcher plans to conduct experiments to make real-time predictions with the model once higher accuracies are achieved.
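The reported significance can be illustrated with a one-sided binomial test comparing the observed accuracy (38 of 60) against chance. The chance level of 1/26 below assumes one class per English letter; the abstract does not state the exact number of letter classes, so this is a hedged illustration, not the paper's actual analysis.

```python
# Sanity-check the reported accuracy against chance with a one-sided
# binomial test. The chance level 1/26 assumes 26 letter classes
# (an assumption; the abstract does not state the class count).
from scipy.stats import binomtest

n_correct = 38      # correctly classified samples (from the abstract)
n_total = 60        # total evaluated samples (from the abstract)
chance = 1 / 26     # assumed chance accuracy for 26 letter classes

result = binomtest(n_correct, n_total, p=chance, alternative="greater")
print(f"accuracy = {n_correct / n_total:.4f}")  # → 0.6333
print(f"p-value  = {result.pvalue:.3g}")
```

For any plausible multi-class setup (three or more letters), the resulting p-value falls well below the 0.00001 threshold stated in the abstract.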
