Abstract

Brain-computer interface (BCI) systems have been proposed as a means of communication for locked-in patients. One common BCI paradigm is motor imagery, in which the user controls the BCI by imagining movements of different body parts. Imagining movements of different body parts is known to produce event-related desynchronization (ERD) in various frequency bands. Existing methods such as common spatial patterns (CSP) and its refinement, filter-bank common spatial patterns (FBCSP), aim to find features that are informative for classifying the motor imagery class. Our proposed method, TA-CSPNN, is a temporally adaptive implementation of the commonly used FBCSP method built on convolutional neural networks. With this method we aim to: (1) make feature extraction and classification end-to-end, (2) base it on the way CSP/FBCSP extract relevant features, and (3) reduce the number of trainable parameters relative to existing deep learning methods, to improve generalizability on noisy data such as EEG. Importantly, we show that this reduction in parameters does not hurt performance; in fact, the trained network generalizes better for data from some participants. We report results on two datasets: the publicly available BCI Competition IV dataset 2a and an in-house motor imagery dataset.
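To make the CSP principle the abstract refers to concrete, the sketch below shows a minimal NumPy implementation of classic two-class CSP with log-variance features, the building block that FBCSP applies per frequency band. This is an illustrative reconstruction of the standard textbook algorithm, not the authors' TA-CSPNN code; all function names and the `n_pairs` parameter are our own choices.

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Classic two-class CSP (illustrative sketch, not the paper's code).

    X1, X2: arrays of shape (trials, channels, samples), one per class.
    Returns 2*n_pairs spatial filters, rows of shape (channels,).
    """
    def avg_cov(X):
        # trace-normalized average spatial covariance over trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whiten the composite covariance C1 + C2
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # eigendecompose the whitened class-1 covariance;
    # extreme eigenvalues give maximally discriminative variance
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    W = B.T @ P  # full filter matrix, rows sorted by ascending eigenvalue
    # keep n_pairs filters from each end of the spectrum
    idx = np.concatenate([np.arange(n_pairs),
                          np.arange(len(d) - n_pairs, len(d))])
    return W[idx]

def log_var_features(X, W):
    """Project trials through the CSP filters and take log-variance."""
    Z = np.einsum('fc,tcs->tfs', W, X)          # (trials, filters, samples)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

FBCSP band-pass filters the EEG into several bands and runs this procedure per band; TA-CSPNN, as the abstract describes, instead learns the temporal (band) and spatial filters jointly inside a CNN trained end-to-end.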
