Abstract

Dysarthria is a condition in which the muscles used for speech are weakened due to neurological disorders such as facial paralysis, stroke, and brain injury. As a result, patients suffering from this condition cannot convey their basic needs to their caretakers. A recent line of research addresses this issue using a brain-computer interface (BCI). A BCI is a breakthrough in the field of neuroscience that translates human brain signals into machine-understandable instructions. The main objective of this chapter is to provide a systematic study of the deep learning and machine learning techniques that can be used to process electroencephalography (EEG) signals to develop communication tools for paralyzed people, as well as of future work in this field of study. The main advantage of deep learning techniques is the reduced computation cost achieved by avoiding explicit feature extraction from electrophysiological signals. Earlier literature on deep learning states that it does not work well with small datasets and may therefore be unsuitable for EEG datasets collected from only a few subjects in healthcare-related analyses. Although deep learning has been widely used in various data-analytics tasks, it remains less explored with EEG data. However, recent studies have shown that deep learning can perform well even on datasets with few samples. Hence, this chapter explores the feasibility of using deep learning and machine learning techniques for EEG-based communication systems. It also discusses future research directions and challenges in detail to enable researchers in this domain to explore further.
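To make the contrast concrete, the following minimal sketch (not taken from the chapter; all shapes, channel counts, and the two-class imagined-command task are illustrative assumptions) shows a classical EEG pipeline with hand-crafted band-power features feeding a shallow classifier, next to an end-to-end 1D convolutional network that consumes raw EEG epochs without a separate feature-extraction stage.

```python
# Illustrative sketch only: synthetic data stands in for a real EEG recording.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import welch
from sklearn.svm import SVC

n_epochs, n_channels, n_samples, fs = 64, 8, 256, 128   # assumed recording setup
X = np.random.randn(n_epochs, n_channels, n_samples).astype(np.float32)
y = np.random.randint(0, 2, n_epochs)                    # two imagined commands (assumed)

# --- Classical route: explicit feature extraction, then a shallow classifier ---
freqs, psd = welch(X, fs=fs, nperseg=128, axis=-1)       # power spectral density per channel
alpha = (freqs >= 8) & (freqs <= 13)                     # alpha-band mask (8-13 Hz)
features = psd[:, :, alpha].mean(axis=-1)                # per-channel alpha band power
svm = SVC().fit(features, y)                             # classifier sees features, not raw EEG

# --- Deep-learning route: the network learns directly from the raw epochs ---
cnn = nn.Sequential(
    nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), # temporal filters learned from data
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                                    # class scores for the two commands
)
logits = cnn(torch.from_numpy(X))                        # shape: (n_epochs, 2)
```

In the classical route, the choice of band and feature is fixed by the designer; in the deep-learning route, the convolutional filters that play the role of features are learned during training, which is the computational and practical simplification the chapter refers to.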
