Abstract

In the field of human-computer interaction, the detection, extraction, and classification of electroencephalogram (EEG) spectral and spatial features are crucial for developing a practical and robust non-invasive EEG-based brain-computer interface. Recently, owing to the popularity of end-to-end deep learning, the applicability of algorithms such as convolutional neural networks (CNNs) to these tasks has been explored. This paper presents an improved and compact CNN for motor imagery decoding based on an adaptation of SincNet, which was originally developed for speaker recognition from raw audio input. This adaptation yields a compact end-to-end neural network with state-of-the-art (SOTA) performance and enables network interpretability for neurophysiological validation in terms of cortical rhythms and spatial analysis. To validate the performance of the proposed algorithm, two datasets were used: the first is the publicly available BCI Competition IV dataset 2a, which is often used as a benchmark for motor imagery classification algorithms; the second consists of primary data originally collected to study the difference between motor imagery and mental-task-associated motor imagery BCI, and was used to test the plausibility of the proposed algorithm in highlighting these differences in terms of cortical rhythms. Competitive decoding performance was achieved on both datasets in comparison with SOTA CNN models, while requiring the lowest number of trainable parameters. In addition, the proposed architecture was shown to perform a cleaner band-pass, highlighting the frequency bands that are crucial and neurophysiologically plausible for solving the classification tasks.
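The key ingredient of a SincNet-style adaptation is a sinc-parameterized temporal convolution: each filter learns only a low cutoff frequency and a bandwidth rather than free kernel weights, so the kernel is constrained to a windowed band-pass shape and the learned parameters can be read directly as frequency bands. The sketch below illustrates this idea in PyTorch under assumed settings (single-channel EEG at 250 Hz, 8 filters of length 65, Hamming window, initial cutoffs spread over roughly 4-40 Hz); the class name, initialization values, and hyperparameters are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn


class SincConv1d(nn.Module):
    """Minimal sketch of a SincNet-style band-pass convolution layer.

    Each filter is parameterized by a learnable low cutoff and bandwidth;
    the kernel is the difference of two windowed sinc low-pass filters,
    so learned parameters map directly to interpretable frequency bands.
    (Hyperparameters below are assumptions, not the paper's exact values.)
    """

    def __init__(self, out_channels=8, kernel_size=65, sample_rate=250):
        super().__init__()
        self.kernel_size = kernel_size

        # Initialise cutoffs roughly over the EEG range (assumed 4-40 Hz).
        low_hz = torch.linspace(4.0, 36.0, out_channels)
        band_hz = torch.full((out_channels,), 4.0)
        self.low_hz = nn.Parameter(low_hz.unsqueeze(1))    # shape (F, 1)
        self.band_hz = nn.Parameter(band_hz.unsqueeze(1))  # shape (F, 1)

        # Fixed buffers: symmetric time axis (seconds) and Hamming window.
        n = (kernel_size - 1) // 2
        t = torch.arange(-n, n + 1).float() / sample_rate  # shape (K,)
        self.register_buffer("t", t)
        self.register_buffer(
            "window", torch.hamming_window(kernel_size, periodic=False)
        )

    def forward(self, x):
        # x: (batch, 1, time) single-channel EEG; returns (batch, F, time).
        low = torch.abs(self.low_hz)            # enforce f_low >= 0
        high = low + torch.abs(self.band_hz)    # enforce f_high >= f_low

        # Band-pass kernel = difference of two sinc low-pass filters.
        t = self.t.unsqueeze(0)                                # (1, K)
        sinc_high = 2 * high * torch.sinc(2 * high * t)        # (F, K)
        sinc_low = 2 * low * torch.sinc(2 * low * t)           # (F, K)
        band_pass = (sinc_high - sinc_low) * self.window
        band_pass = band_pass / (2 * self.band_hz.abs() + 1e-8)

        filters = band_pass.unsqueeze(1)                       # (F, 1, K)
        return nn.functional.conv1d(x, filters, padding=self.kernel_size // 2)


if __name__ == "__main__":
    x = torch.randn(2, 1, 1000)   # 2 trials, 1 EEG channel, 4 s at 250 Hz
    layer = SincConv1d()
    print(layer(x).shape)         # torch.Size([2, 8, 1000])
```

Because only two scalars per filter are trained, such a layer keeps the parameter count low and, after training, the learned cutoff pairs can be inspected directly to check whether they align with neurophysiologically plausible rhythms (e.g., mu and beta bands for motor imagery).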
