Abstract

Transfer learning (TL) has demonstrated its efficacy in addressing cross-subject domain adaptation challenges in affective brain-computer interfaces (aBCI). However, previous TL methods usually rely on a fixed distance measure, such as the Euclidean distance, to quantify the distribution dissimilarity between two domains, overlooking the inherent links among similar samples and potentially leading to suboptimal feature mapping. In this study, we introduce a novel algorithm, multi-source manifold metric transfer learning (MSMMTL), to enhance the efficacy of conventional TL. Specifically, we first select source domains based on the Mahalanobis distance to improve the quality of the source domains, and then apply a manifold feature mapping approach that projects the source and target domains onto the Grassmann manifold to mitigate data drift between domains. In this newly established shared space, we optimize the Mahalanobis metric by maximizing inter-class distances while minimizing intra-class distances in the target domain. Because significant distribution discrepancies may persist across domains even on the manifold, we further impose constraints on both domains under the Mahalanobis metric to keep their distributions similar, thereby reducing distributional disparities and enhancing electroencephalogram (EEG) emotion recognition performance. In cross-subject experiments, the MSMMTL model achieves average classification accuracies of 88.83% and 65.04% on SEED and DEAP, respectively, underscoring the superiority of the proposed MSMMTL over other state-of-the-art methods. MSMMTL can effectively address the problem of individual differences in EEG-based affective computing.
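As an illustrative sketch only (not the authors' implementation), the Mahalanobis-distance-based source selection described above can be expressed as ranking candidate source domains by the Mahalanobis distance between their feature distribution and the target's, then keeping the closest ones. The function names, the pooled-covariance choice, and the regularization constant below are all assumptions for illustration.

```python
import numpy as np

def mahalanobis_domain_distance(source_feats, target_feats):
    """Mahalanobis distance between source and target feature means.

    Illustrative sketch: uses the covariance pooled over both domains,
    with a small ridge term (assumed value) for numerical stability.
    """
    mu_s = source_feats.mean(axis=0)
    mu_t = target_feats.mean(axis=0)
    pooled = np.cov(np.vstack([source_feats, target_feats]).T)
    inv_cov = np.linalg.inv(pooled + 1e-6 * np.eye(pooled.shape[0]))
    diff = mu_s - mu_t
    return float(np.sqrt(diff @ inv_cov @ diff))

def select_sources(source_domains, target_feats, k):
    """Keep the k candidate source domains closest to the target domain."""
    dists = [mahalanobis_domain_distance(s, target_feats)
             for s in source_domains]
    order = np.argsort(dists)
    return [source_domains[i] for i in order[:k]]
```

In a cross-subject setting, each candidate source domain would be one subject's EEG feature matrix (trials x features); the retained domains then feed the subsequent manifold mapping and metric-optimization steps.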
