Abstract

Electroencephalography (EEG) brain-computer interfaces (BCIs) have the potential to provide new paradigms for controlling computers and devices. The accuracy of brain-pattern classification in an EEG BCI depends directly on the quality of the features extracted from the EEG signals. Feature extraction currently relies heavily on prior knowledge to engineer features (for example, from specific frequency bands), so better extraction of EEG features is an important research direction. In this work, we propose an end-to-end deep neural network that automatically discovers and combines features for motor imagery (MI) based EEG BCI with four or more imagery classes (multi-task). First, spectral-domain features of the EEG signals are learned by compact convolutional neural network (CCNN) layers. Then, gated recurrent unit (GRU) layers automatically learn temporal patterns. Finally, an attention mechanism dynamically combines the extracted spectral-temporal features across EEG channels, reducing redundancy. We evaluate our method on the BCI Competition IV-2a dataset and on a dataset we collected. The average classification accuracy on the 4-class BCI Competition IV-2a dataset was 85.1% ± 6.19%, comparable to recent work in the field and with low variability among participants; the average accuracy on our 6-class dataset was 64.4% ± 8.35%. Our dynamic fusion of spectral-temporal features is end-to-end and uses relatively few network parameters, and the experimental results demonstrate its effectiveness and potential.
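The channel-wise attention fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the dot-product scoring function, and the random stand-in for learned parameters are all assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, w):
    """Combine per-channel feature vectors with attention weights.

    features: (n_channels, d) spectral-temporal features, one row per EEG channel
    w: (d,) scoring vector (a stand-in for trained attention parameters)
    Returns the fused (d,) feature vector and the (n_channels,) weights.
    """
    scores = features @ w        # one scalar relevance score per channel
    alpha = softmax(scores)      # non-negative weights that sum to 1
    return alpha @ features, alpha  # weighted sum across channels

# Toy example: 22 EEG channels (as in BCI Competition IV-2a), 16-dim features
rng = np.random.default_rng(0)
feats = rng.standard_normal((22, 16))
w = rng.standard_normal(16)
fused, alpha = attention_fuse(feats, w)
```

In a trained network the weights `alpha` would down-weight redundant or uninformative channels, so the fused vector passed to the classifier emphasizes the most task-relevant spectral-temporal features.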
