Recent research on emotion recognition suggests that domain adaptation, a form of transfer learning, can solve the cross-subject problem in the affective brain-computer interface (aBCI) field. However, traditional domain adaptation methods perform single-source-to-single-target transfer, or simply merge the different source domains into one larger domain before transferring knowledge, which can result in negative transfer. In this study, a multi-source transfer learning framework was proposed to improve the performance of multi-source electroencephalogram (EEG) emotion recognition. The method first used the data distribution similarity ranking (DDSA) method to select an appropriate set of source domains for each target domain offline, and reduced data drift between domains through manifold feature mapping on a Grassmann manifold. Meanwhile, the minimum-redundancy maximum-relevance algorithm (mRMR) was employed to select more representative manifold features, the divergence between the conditional and marginal distributions of the manifold features was minimized, and a domain-invariant classifier was then learned via structural risk minimization (SRM). Finally, a weighted fusion criterion was applied to further improve recognition performance. We compared our method with several state-of-the-art domain adaptation techniques on the SEED and DEAP datasets. Results showed that, compared with the conventional MEDA algorithm, the recognition accuracy of our proposed algorithm on the SEED and DEAP datasets improved by 6.74% and 5.34%, respectively. Moreover, compared with TCA, JDA, and other state-of-the-art algorithms, our proposed method also performed better, with best average accuracies of 86.59% on SEED and 64.40% on DEAP. These results demonstrate that the proposed multi-source transfer learning framework is more effective and feasible than other state-of-the-art methods in recognizing different emotions by solving the cross-subject problem.
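The source-selection and fusion steps described above can be illustrated with a minimal sketch. This is not the authors' implementation: synthetic 2-D features stand in for EEG manifold features, the DDSA ranking is approximated by a linear-kernel MMD between domain means, a nearest-centroid classifier stands in for the SRM-learned domain-invariant classifier, and the weighted fusion is approximated by inverse-distance vote weighting. All function names (`mmd_linear`, `multi_source_predict`, `make_domain`) are hypothetical.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    # Linear-kernel MMD estimate: squared distance between domain means.
    # Used here as a stand-in for the DDSA similarity measure.
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def nearest_centroid_fit(X, y):
    # Toy per-source classifier standing in for the SRM-learned classifier.
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

def multi_source_predict(sources, Xt, k=2):
    # 1) Rank source domains by distribution similarity to the target
    #    and keep the k most similar ones (DDSA-like selection).
    dists = np.array([mmd_linear(Xs, Xt) for Xs, _ in sources])
    idx = np.argsort(dists)[:k]
    # 2) Weight each selected source inversely to its domain distance
    #    (a simple form of weighted fusion).
    weights = 1.0 / (dists[idx] + 1e-8)
    weights /= weights.sum()
    # 3) Train one classifier per selected source and fuse weighted votes.
    classes = np.unique(np.concatenate([sources[i][1] for i in idx]))
    votes = np.zeros((Xt.shape[0], classes.size))
    for w, i in zip(weights, idx):
        preds = nearest_centroid_predict(nearest_centroid_fit(*sources[i]), Xt)
        for ci, c in enumerate(classes):
            votes[:, ci] += w * (preds == c)
    return classes[np.argmax(votes, axis=1)]

# Demo on synthetic domains: one source close to the target, one shifted.
rng = np.random.default_rng(0)

def make_domain(shift, n=100):
    X0 = rng.normal([-2.0 + shift, 0.0], 0.5, size=(n, 2))
    X1 = rng.normal([2.0 + shift, 0.0], 0.5, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

src_similar = make_domain(0.0)   # distribution close to the target
src_shifted = make_domain(5.0)   # strongly drifted source
Xt, yt = make_domain(0.2)

pred = multi_source_predict([src_similar, src_shifted], Xt, k=1)
accuracy = float((pred == yt).mean())
```

With `k=1` the DDSA-like ranking discards the drifted source, so the fused prediction is driven by the well-matched domain; including the drifted source with equal weight would be exactly the naive merging the abstract warns can cause negative transfer.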