Abstract

Electroencephalogram (EEG) signals can be used to control devices through brain-computer interface (BCI) systems, which offer assistance to patients with neuromuscular diseases. This article aims to determine the feasibility of using spatial covariance matrices with deep convolutional neural networks for multiclass motor imagery (MI) task classification in EEG-based BCIs. Covariance matrices derived from multichannel EEG data are converted into color-scaled images, which are used to train the AlexNet neural network via transfer learning. Samples of 2-s duration are extracted at 200-ms intervals from each MI activity EEG record, increasing the training data size sixfold and reducing the latency in real-time implementations. Performance is evaluated on two popular publicly available BCI datasets for two-class and four-class MI classification. BCI Competition IV Dataset IIa shows an average accuracy of 93.99% (k = 0.88) for two-class and 86.70% (k = 0.82) for four-class MI classification. BCI Competition III Dataset IVa exhibits an average accuracy of 97.02% (k = 0.94) for two-class classification. Good performance is also observed when training with cross-subject data for both two-class and four-class classification. The proposed method shows improved accuracy for multiclass MI classification compared with existing work. The CNN is able to extract features from the covariance matrix and classify the MI data with reasonable accuracy. Since the method can handle high-dimensional data, dimension-reduction and source-separation techniques are not required.
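As a rough illustration of the pipeline summarized above, the sketch below shows how windowed spatial covariance matrices might be computed from multichannel EEG, rescaled into image form, and passed to a pretrained AlexNet with a replaced classifier head. This is not the authors' code: the function names, the simple min-max scaling used in place of the paper's color mapping, and the choice of PyTorch/torchvision are assumptions made for illustration only.

# Illustrative sketch (assumed PyTorch/torchvision implementation, not the authors' code).
# Assumes eeg is a (n_channels, n_samples) NumPy array sampled at fs Hz.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def covariance_windows(eeg, fs, win_s=2.0, stride_s=0.2):
    """Slide a 2-s window in 200-ms steps and return one spatial covariance
    matrix (channels x channels) per window, as described in the abstract."""
    win, stride = int(win_s * fs), int(stride_s * fs)
    covs = []
    for start in range(0, eeg.shape[1] - win + 1, stride):
        seg = eeg[:, start:start + win]
        covs.append(np.cov(seg))  # covariance across channels for this segment
    return np.stack(covs)

def cov_to_rgb_tensor(cov, size=227):
    """Min-max scale a covariance matrix to [0, 1], replicate it over three
    channels, and resize to AlexNet's input size (a stand-in for the paper's
    color-scaled images)."""
    scaled = (cov - cov.min()) / (cov.max() - cov.min() + 1e-12)
    img = torch.tensor(scaled, dtype=torch.float32).expand(3, -1, -1)
    return nn.functional.interpolate(img.unsqueeze(0), size=(size, size),
                                     mode="bilinear", align_corners=False).squeeze(0)

# Transfer learning: keep the pretrained AlexNet features and replace only the
# final classifier layer with one sized for the number of MI classes.
n_classes = 4  # e.g., four-class MI in BCI Competition IV Dataset IIa
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
net.classifier[6] = nn.Linear(4096, n_classes)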
