Abstract

To improve the spatial resolution of the electroencephalogram (EEG) signal, it is conventional to use a large number of scalp electrodes while recording oscillatory rhythms in motor imagery based brain-computer interfaces (MI-BCI). However, this increases the dimensionality of the data, which may hurt generalization and lead to over-fitting. It is therefore necessary to reduce the dimensionality of the input data in an optimal way. In this paper, we propose a method that uses an artificial neural network to reduce the dimensionality of EEG for MI-BCI. We train an under-complete sparse autoencoder neural network for each subject separately to encode the EEG data optimally. The optimally encoded EEG trials are then used by the Filter Bank Common Spatial Pattern (FBCSP) method to decode the imagined motor movement. Along similar lines, an autoencoder was also trained on subject-independent data. We achieved improved motor imagery classification accuracy when the dimensionality of the data was reduced by almost half compared to the state-of-the-art FBCSP. The performance of the proposed method is also compared with the Sparse Common Spatial Pattern (SCSP) based channel selection method. The average classification accuracy obtained for 10 subjects is 74.3±8.06% with only 13 encoded channels. For the subject-independent autoencoder, we obtained an average classification accuracy of 66.64±3.93% with only 11 encoded channels after cross-validation. The study extends the use of autoencoder neural networks to motor imagery based brain-computer interfaces and shows a significant improvement in performance with reduced data dimensionality.
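The core idea of the abstract — an under-complete sparse autoencoder that compresses a multi-channel EEG sample into a smaller number of encoded channels before spatial filtering — can be sketched as follows. This is not the authors' implementation: the layer sizes (22 input channels, 13 encoded channels), the tanh activation, the L1 sparsity penalty, the learning rate, and the use of random data in place of real EEG trials are all assumptions made for illustration.

```python
# Minimal sketch of an under-complete sparse autoencoder (22 -> 13 channels).
# All hyperparameters and the synthetic "EEG" data are assumptions; the paper's
# actual architecture, optimizer, and training details are not specified here.
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_code, n_samples = 22, 13, 500          # assumed dimensions
X = rng.standard_normal((n_samples, n_ch))     # stand-in for EEG samples

W_enc = 0.1 * rng.standard_normal((n_ch, n_code))
W_dec = 0.1 * rng.standard_normal((n_code, n_ch))
lr, l1 = 1.0, 1e-3                             # assumed learning rate / sparsity weight
losses = []

for epoch in range(300):
    H = np.tanh(X @ W_enc)                     # under-complete sparse code (13-dim)
    X_hat = H @ W_dec                          # linear reconstruction back to 22 channels
    err = X_hat - X
    # reconstruction MSE plus L1 penalty on the hidden code (sparsity)
    loss = (err ** 2).mean() + l1 * np.abs(H).mean()
    losses.append(loss)

    # manual backpropagation through decoder and encoder
    dX_hat = 2.0 * err / err.size
    g_dec = H.T @ dX_hat
    g_H = dX_hat @ W_dec.T + l1 * np.sign(H) / H.size
    g_enc = X.T @ (g_H * (1.0 - H ** 2))       # tanh derivative

    W_enc -= lr * g_enc
    W_dec -= lr * g_dec

# After training, H (the encoded trials) would be passed to FBCSP in place of
# the raw channels, roughly halving the input dimensionality as in the abstract.
```

In the paper's per-subject setting, one such autoencoder would be trained on each subject's data separately; the subject-independent variant would instead be trained on pooled data from other subjects.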
