Abstract

Classification of EEG signals is a cornerstone of building motor-imagery (MI) based brain-computer interface (BCI) systems. EEG signals vary from one subject to another, and even across trials for the same subject, which is why designing a general classification model remains an open problem. Deep learning dominates fields such as computer vision and natural language processing, but its use for EEG signal classification is still under investigation. We follow a recent trend in which EEG signals are transformed into images, so that the task becomes an image classification problem to which deep learning is well suited. We used the PhysioNet EEG Motor Movement/Imagery dataset, which comprises 109 subjects. The motor imagery EEG signals in three frequency bands (Delta [0.5–4 Hz], Mu [8–13 Hz], and Beta [13–30 Hz]) were transformed into 3-channel images (one channel per band) using the azimuthal equidistant projection and the Clough-Tocher algorithm for interpolation. These 2-D images are the input to our model, which consists of a Deep Convolutional Neural Network (DCNN) to extract spatial and frequency features, followed by a Long Short-Term Memory (LSTM) network to extract temporal features, and finally classifies each trial into 5 classes (4 motor imagery tasks and one rest). Our results are promising (70.64% average accuracy), 5% better than a Support Vector Machine (SVM) method on the same dataset. We also observed that including the Delta band increases classification accuracy by 2.51%.
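The following is a minimal sketch, not the authors' published code, of the pipeline the abstract describes: per-electrode band powers are projected to 2-D with an azimuthal equidistant projection, interpolated onto a grid with SciPy's Clough-Tocher interpolator, stacked into a 3-channel image, and fed to a CNN followed by an LSTM. The grid size (32), number of frames per trial (7), layer widths, and the use of Keras are illustrative assumptions not specified in the abstract.

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator
from tensorflow.keras import layers, models


def azim_proj(pos_3d):
    """Azimuthal equidistant projection of a 3-D electrode position onto the 2-D plane."""
    x, y, z = pos_3d
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    elev = np.arcsin(z / r)        # elevation angle
    az = np.arctan2(y, x)          # azimuth angle
    rho = np.pi / 2 - elev         # distance from the vertex (projection centre)
    return rho * np.cos(az), rho * np.sin(az)


def bands_to_image(band_powers, locs_2d, size=32):
    """Interpolate per-electrode band powers onto a size x size grid.

    band_powers: (n_electrodes, 3) array, one column per band (Delta, Mu, Beta).
    locs_2d:     (n_electrodes, 2) projected electrode coordinates.
    Returns a (size, size, 3) image, one channel per band.
    """
    grid_x, grid_y = np.mgrid[
        locs_2d[:, 0].min():locs_2d[:, 0].max():size * 1j,
        locs_2d[:, 1].min():locs_2d[:, 1].max():size * 1j,
    ]
    channels = []
    for b in range(band_powers.shape[1]):
        interp = CloughTocher2DInterpolator(locs_2d, band_powers[:, b], fill_value=0.0)
        channels.append(interp(grid_x, grid_y))
    return np.stack(channels, axis=-1)


def build_cnn_lstm(n_frames=7, img_size=32, n_classes=5):
    """CNN applied to each frame image, LSTM over the frame sequence, softmax over 5 classes."""
    frames = layers.Input(shape=(n_frames, img_size, img_size, 3))
    cnn = models.Sequential([
        layers.Input(shape=(img_size, img_size, 3)),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])
    x = layers.TimeDistributed(cnn)(frames)   # spatial/frequency features per frame
    x = layers.LSTM(128)(x)                   # temporal features across frames
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(frames, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Under these assumptions, each trial would be converted into a short sequence of topographic images (one per time window) with `bands_to_image`, and the stacked sequence passed to the model returned by `build_cnn_lstm` for 5-class prediction.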
