Abstract

A brain-computer interface (BCI) provides a platform for humans to communicate using electroencephalogram (EEG) signals by converting them into commands that an output device can use to perform the desired tasks. This paper focuses on the identification of vowels from EEG signals. First, a dataset of EEG signals was created for vowel identification by collecting data from 16 subjects using the 14-channel Emotiv EPOC+ EEG device. Then, a deep learning model based on a multi-headed convolutional neural network (CNN) is proposed for feature extraction and classification of imagined vowel speech. Fifth-order Butterworth lowpass and bandpass filters are applied for denoising and sub-banding of the EEG signals, which are further pre-processed using the Hilbert-Huang Transform. The model achieved an average accuracy of 97.67% with five-fold cross-validation using all six sub-bands of the EEG signals, along with an average precision of 95.54% and an average recall of 95.11%. The proposed model is statistically validated using the Mann-Whitney U test and the paired t-test, with p-values less than 0.05.
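The denoising and sub-banding step described above can be sketched as follows. This is a minimal illustration using SciPy's Butterworth filter design, not the authors' implementation: the sampling rate, lowpass cutoff, and the six sub-band edges are hypothetical placeholders, since the abstract names fifth-order filters and six sub-bands but does not give the frequency ranges.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate (Hz); not stated in the abstract

def butter_lowpass(signal, cutoff, fs=FS, order=5):
    """Fifth-order Butterworth lowpass for denoising (zero-phase via filtfilt)."""
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, signal)

def butter_bandpass(signal, low, high, fs=FS, order=5):
    """Fifth-order Butterworth bandpass extracting one sub-band."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, signal)

# Hypothetical sub-band edges in Hz; the actual six bands used in the
# paper are not specified in the abstract.
SUB_BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45), (45, 60)]

# Synthetic stand-in for one trial: 14 channels, 5 seconds of EEG
eeg = np.random.randn(14, FS * 5)
denoised = butter_lowpass(eeg, cutoff=60)
sub_banded = np.stack(
    [butter_bandpass(denoised, lo, hi) for lo, hi in SUB_BANDS]
)
# sub_banded has shape (6, 14, FS * 5): six sub-bands per channel,
# ready for Hilbert-Huang-based feature extraction downstream.
```

Zero-phase filtering with `filtfilt` is a common choice for EEG preprocessing because it avoids shifting the signal in time; the abstract does not specify whether causal or zero-phase filtering was used.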
