Abstract

The brain-computer interface (BCI) based on electroencephalography (EEG) is a promising technology that allows computers to estimate human intentions. Recognizing intentions such as motor imagery (MI) with high reliability remains one of the major challenges in the BCI field. Recently, researchers have applied transfer learning to various BCI datasets, but the resulting classification accuracy has been low. This study aimed to increase MI classification accuracy through sequential transfer learning within a single dataset. EEG-MI data from the nine subjects of dataset 2a of BCI Competition IV were used, and EEGNet was used for MI classification. A pre-trained model was first constructed by learning whether the data were MI or not. The model was then sequentially fine-tuned through transfer learning on the four MI tasks (i.e., left hand, right hand, both feet, and tongue). The model was able to classify MI with 91.34% accuracy. Meanwhile, the baseline model without transfer learning achieved an accuracy of 61.62%, whereas the fine-tuned model reached an improved accuracy of 63.82%. Consequently, sequential transfer learning was able to improve the performance of MI-BCI.
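A minimal sketch of the sequential transfer-learning scheme described above is given below, assuming a Keras-style workflow. The compact convolutional backbone is only a simplified stand-in for EEGNet, and the data shapes, helper names, and hyperparameters are illustrative assumptions rather than the paper's actual settings.

```python
# Hypothetical sketch: pre-train a compact CNN (a simplified stand-in for
# EEGNet) on a binary MI-vs-non-MI task, then reuse the same backbone and
# fine-tune a new head on the four MI classes (left hand, right hand,
# both feet, tongue). Shapes and hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS, N_SAMPLES = 22, 1000   # e.g., BCI Competition IV 2a: 22 EEG channels


def build_feature_extractor():
    """Compact temporal/spatial convolution block, loosely EEGNet-style."""
    inputs = layers.Input(shape=(N_CHANNELS, N_SAMPLES, 1))
    x = layers.Conv2D(8, (1, 64), padding="same", use_bias=False)(inputs)   # temporal filters
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((N_CHANNELS, 1), use_bias=False)(x)          # spatial filters
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Flatten()(x)
    return models.Model(inputs, x, name="feature_extractor")


def attach_head(backbone, n_classes):
    """Attach a fresh softmax classification head to the shared backbone."""
    out = layers.Dense(n_classes, activation="softmax")(backbone.output)
    model = models.Model(backbone.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Stage 1: pre-train on the binary MI vs. non-MI task (dummy data here).
backbone = build_feature_extractor()
binary_model = attach_head(backbone, n_classes=2)
X_bin = np.random.randn(64, N_CHANNELS, N_SAMPLES, 1).astype("float32")
y_bin = np.random.randint(0, 2, size=64)
binary_model.fit(X_bin, y_bin, epochs=1, batch_size=16, verbose=0)

# Stage 2: sequential fine-tuning on the four MI classes. The backbone
# weights learned in stage 1 are kept and trained further on the new task.
four_class_model = attach_head(backbone, n_classes=4)
X_mi = np.random.randn(64, N_CHANNELS, N_SAMPLES, 1).astype("float32")
y_mi = np.random.randint(0, 4, size=64)
four_class_model.fit(X_mi, y_mi, epochs=1, batch_size=16, verbose=0)
```

Because the second head is attached to the same backbone object, the stage-2 model starts from the representations learned during the binary pre-training step, which is the essence of the sequential fine-tuning procedure.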
