Abstract

In the context of motor imagery, electroencephalography (EEG) data vary from subject to subject, so the performance of a classifier trained on data from multiple subjects in a specific domain typically degrades when it is applied to a different subject. While collecting enough samples from each subject would address this issue, doing so is often too time-consuming and impractical. To tackle this problem, we propose a novel end-to-end deep domain adaptation method that improves classification performance on a single subject (target domain) by taking useful information from multiple subjects (source domain) into consideration. Specifically, the proposed method jointly optimizes three modules: a feature extractor, a classifier, and a domain discriminator. The feature extractor learns discriminative latent features by mapping the raw EEG signals into a deep representation space. A center loss is further employed to constrain an invariant feature space and reduce intrasubject nonstationarity. Furthermore, the domain discriminator matches the feature distribution shift between the source and target domains through an adversarial learning strategy. Finally, based on the consistent deep features from both domains, the classifier is able to leverage information from the source domain and accurately predict labels in the target domain at test time. To evaluate our method, we have conducted extensive experiments on two public EEG data sets, data sets IIa and IIb of brain-computer interface (BCI) Competition IV. The experimental results validate the efficacy of our method. Therefore, our method is promising for reducing the calibration time required to use a BCI and for promoting the development of BCI systems.
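The abstract describes a joint objective combining a classification loss, a center loss for intrasubject compactness, and an adversarial domain loss. As an illustration only, the sketch below shows one common way such a center loss and combined objective can be written; the weighting hyperparameters `lam` and `mu`, the function names, and the sign convention (a gradient-reversal-style subtraction of the domain loss for the feature extractor) are assumptions, not details taken from the paper.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean squared distance between each feature vector and its
    class center (one common formulation of the center loss).

    features: (N, D) array of deep features
    labels:   (N,) array of integer class labels
    centers:  (C, D) array, one learned center per class
    """
    diffs = features - centers[labels]          # per-sample offset from class center
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

def joint_objective(cls_loss, c_loss, domain_loss, lam=0.5, mu=0.01):
    """Hypothetical combined objective for the feature extractor:
    classification loss + weighted center loss, minus the domain
    discriminator's loss (adversarial term via gradient reversal).
    lam and mu are illustrative trade-off weights, not paper values.
    """
    return cls_loss + mu * c_loss - lam * domain_loss
```

With features exactly at their class centers the center loss is zero, and it grows quadratically as features drift away, which is what encourages the compact, subject-invariant feature space the abstract refers to.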
