Abstract

Identifying motor and mental imagery electroencephalography (EEG) signals is imperative to realizing automated, robust brain-computer interface (BCI) systems. In the present study, we propose a new automated framework based on pretrained convolutional neural networks (CNNs) that is feasible for robust BCI systems trained on both small and ample samples of motor and mental imagery EEG data. The framework is explored by investigating the implications of several limiting factors, such as learning rates and optimizers, processed versus unprocessed scalograms, and features derived from untuned pretrained models, across small, medium, and large pretrained CNN models. The experiments were performed on three public datasets obtained from BCI Competition III. The datasets were denoised with multiscale principal component analysis (MSPCA), and time-frequency scalograms were obtained by employing a continuous wavelet transform (CWT). The scalograms were fed into several variants of ten pretrained models for feature extraction and identification of the different EEG tasks. The experimental results showed that ShuffleNet yielded the highest average classification accuracy of 99.52% using the RMSProp optimizer with a learning rate of 0.0001. Low learning rates were observed to converge to better performance than high learning rates. Moreover, noisy scalograms and features extracted from untuned networks resulted in slightly lower performance than denoised scalograms and tuned networks, respectively. The overall results suggest that pretrained models are robust for identifying EEG signals because they preserve the time-frequency structure of the signals and deliver promising classification outcomes.
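The following is a minimal sketch of the pipeline summarized above: a CWT scalogram of an EEG segment fed into a pretrained ShuffleNet that is fine-tuned with RMSProp at a learning rate of 0.0001. It is an illustration under assumptions, not the authors' code: the Morlet ("morl") wavelet, the 1-64 scale range, the 224x224 resize, the two-class head, and the synthetic EEG segment are placeholders, and the MSPCA denoising step is omitted.

```python
# Hedged sketch of the scalogram -> pretrained-CNN pipeline (assumptions noted above).
import numpy as np
import pywt
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

def eeg_to_scalogram(segment, fs=250, wavelet="morl", num_scales=64):
    """Continuous wavelet transform of a 1-D EEG segment -> RGB scalogram image."""
    scales = np.arange(1, num_scales + 1)
    coeffs, _ = pywt.cwt(segment, scales, wavelet, sampling_period=1.0 / fs)
    mag = np.abs(coeffs)
    # Scale magnitudes to 0-255 and replicate to 3 channels for an ImageNet-style CNN input.
    mag = 255 * (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    return Image.fromarray(mag.astype(np.uint8)).convert("RGB")

# Pretrained ShuffleNet with its classifier head replaced for the EEG task classes.
num_classes = 2  # dataset-dependent placeholder (e.g., two imagery classes)
model = models.shufflenet_v2_x1_0(weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# RMSProp with the low learning rate (1e-4) reported to converge best.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# One illustrative fine-tuning step on a synthetic 3-second EEG segment at 250 Hz.
t = np.arange(0, 3, 1 / 250)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
x = preprocess(eeg_to_scalogram(fake_eeg)).unsqueeze(0)  # shape: (1, 3, 224, 224)
y = torch.tensor([0])

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

In practice, the same scalogram images could also be passed through the untuned pretrained network purely as a feature extractor, which the study reports performs slightly worse than fine-tuning.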
