Abstract
Electroencephalography (EEG) motor imagery (MI) signals have recently gained considerable attention because they encode a person's intent to perform an action. Researchers have used MI signals to help disabled people control devices such as wheelchairs, and even for autonomous driving. Decoding these signals accurately is therefore essential for a brain–computer interface (BCI) system. However, EEG decoding is a challenging task because of the signals' complexity, dynamic nature, and low signal-to-noise ratio. Convolutional neural networks (CNNs) have been shown to extract spatial and temporal features from EEG, but learning the dynamic correlations present in MI signals requires improved CNN models. Both shallow and deep CNNs can extract good features, which indicates that relevant features can be extracted at different levels of abstraction. Fusion of multiple CNN models, however, has not yet been explored for EEG data. In this work, we propose a multi-layer CNN method that fuses CNNs with different characteristics and architectures to improve EEG MI classification accuracy. Our method uses different convolutional features to capture spatial and temporal features from raw EEG data. We demonstrate that our novel MCNN and CCNN fusion methods outperform state-of-the-art machine learning and deep learning techniques for EEG classification. We have performed various experiments on public datasets to evaluate the proposed CNN fusion method: the proposed MCNN method achieves 75.7% accuracy on the BCI Competition IV-2a dataset and 95.4% on the High Gamma Dataset, and the proposed CCNN method, based on autoencoder cross-encoding, achieves more than 10% improvement for cross-subject EEG classification.
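The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general multi-branch CNN fusion idea it describes: two hypothetical branches (one shallow, one deeper) extract features from the same raw EEG trial, and their concatenated features feed a shared classifier. The branch designs, kernel sizes, and the 22-channel, 1000-sample input shape (typical of a 4 s BCI Competition IV-2a trial at 250 Hz) are illustrative assumptions, not the paper's exact MCNN or CCNN architectures.

```python
import torch
import torch.nn as nn

def shallow_branch(n_ch):
    # Shallow branch: one temporal conv followed by one spatial conv
    # across electrodes, then pooling (Shallow ConvNet-style; assumed).
    return nn.Sequential(
        nn.Conv2d(1, 40, kernel_size=(1, 25)),     # temporal filtering
        nn.Conv2d(40, 40, kernel_size=(n_ch, 1)),  # spatial filtering
        nn.BatchNorm2d(40),
        nn.ELU(),
        nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
        nn.Flatten(),
    )

def deep_branch(n_ch):
    # Deeper branch: stacked conv/pool stages capturing
    # higher-level temporal structure (assumed design).
    return nn.Sequential(
        nn.Conv2d(1, 25, kernel_size=(1, 10)),
        nn.Conv2d(25, 25, kernel_size=(n_ch, 1)),
        nn.BatchNorm2d(25),
        nn.ELU(),
        nn.MaxPool2d(kernel_size=(1, 3)),
        nn.Conv2d(25, 50, kernel_size=(1, 10)),
        nn.BatchNorm2d(50),
        nn.ELU(),
        nn.MaxPool2d(kernel_size=(1, 3)),
        nn.Flatten(),
    )

class FusionCNN(nn.Module):
    """Fuses features from heterogeneous CNN branches and classifies."""
    def __init__(self, n_ch=22, n_classes=4):
        super().__init__()
        self.branches = nn.ModuleList([shallow_branch(n_ch), deep_branch(n_ch)])
        # LazyLinear infers the concatenated feature size on first forward pass
        self.classifier = nn.Sequential(
            nn.LazyLinear(128), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_ch, n_samples)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classifier(feats)

# One MI batch: 8 trials, 22 electrodes, 1000 time samples
model = FusionCNN()
logits = model(torch.randn(8, 1, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```

Concatenating flattened branch outputs is one simple fusion strategy; averaging per-branch logits or learning fusion weights are common alternatives, and the paper's reported numbers should not be expected from this sketch.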