Abstract

Emotion recognition based on electroencephalogram (EEG) signals has received extensive attention because EEG is objective and not subject to conscious control. However, inter-individual differences limit model generalization on cross-subject recognition tasks. To address this problem, this paper proposes a cross-subject emotional EEG classification algorithm based on multi-source domain selection and subdomain adaptation. We first design a multi-representation variational autoencoder (MR-VAE) that automatically extracts emotion-related features from multi-channel EEG, yielding a consistent EEG representation with as little prior knowledge as possible. We then propose a multi-source domain selection algorithm that selects the existing subjects' EEG data closest to the target data in both the global distribution and the subdomain distributions, thereby improving the transfer learning model's performance on the target subject. We use a small amount of annotated target data to achieve knowledge transfer and to raise classification accuracy on the target subject as much as possible, which is of practical significance for clinical research. The proposed method achieves average classification accuracies of 92.83% and 79.30% on the two public datasets SEED and SEED-IV, respectively, exceeding the non-transfer-learning baseline by 26.37% and 22.80%. Furthermore, we validate the proposed method on two other commonly used public datasets, DEAP and DREAMER, establishing state-of-the-art results on the binary classification task of the DEAP dataset and achieving accuracy comparable to several transfer-learning-based methods on the DREAMER dataset. Detailed recognition results on DEAP and DREAMER are given in the Appendix.
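The source-selection idea above can be illustrated with a minimal sketch: rank candidate source subjects by a distribution distance to the target and keep only the closest ones. This is not the paper's exact algorithm (which combines global and subdomain distributions); here we use Maximum Mean Discrepancy (MMD) with an RBF kernel as a stand-in distance, and all names (`mmd_rbf`, `select_sources`, `top_k`) are hypothetical.

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Squared Maximum Mean Discrepancy between sample sets x and y
    with an RBF kernel -- a standard proxy for distribution distance."""
    def k(a, b):
        # pairwise squared Euclidean distances via broadcasting
        d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def select_sources(source_feats, target_feat, top_k=2):
    """Keep the top_k source subjects whose feature distributions are
    closest to the target -- a simplified stand-in for the paper's
    multi-source domain selection step."""
    dists = [(name, mmd_rbf(x, target_feat)) for name, x in source_feats.items()]
    dists.sort(key=lambda t: t[1])
    return [name for name, _ in dists[:top_k]]

# toy example: subject "s1" matches the target distribution, "s2" is shifted
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, (50, 4))
sources = {"s1": rng.normal(0.0, 1.0, (50, 4)),
           "s2": rng.normal(3.0, 1.0, (50, 4))}
print(select_sources(sources, target, top_k=1))  # the closest subject is selected
```

In the paper's setting, the features fed to such a selector would come from the MR-VAE encoder rather than raw EEG, and the selected subjects form the multi-source pool for subdomain adaptation.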
