Abstract

Over the last few years, unsupervised domain adaptation (UDA) based on deep learning has emerged as a way to build cross-subject emotion recognition models from electroencephalogram (EEG) signals by aligning the subject distributions within a latent feature space. However, most reported works share an intrinsic limitation: the alignment of subject distributions is coarse-grained, even though not all of the feature space is shared between subjects. In this paper, we propose a robust unified domain adaptation framework, named Multi-source Feature Alignment and Label Rectification (MFA-LR), which performs fine-grained domain alignment at the subject and class levels, while inter-class separation and robustness against input perturbations are encouraged at a coarse grain. As a complementary step, a pseudo-label correction procedure rectifies mislabeled target samples. Our proposal was assessed on two public datasets, SEED and SEED-IV, on each of the three available sessions, using leave-one-subject-out cross-validation. Experimental results show accuracies of up to 89.11 ± 7.72% and 74.99 ± 12.10% for the best session on SEED and SEED-IV, respectively, as well as average accuracies of 85.27% and 69.58% across all three sessions, outperforming state-of-the-art results.
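The abstract only sketches the approach, so the snippet below is a minimal, illustrative take on two of its ingredients: class-level alignment between a source subject and the target subject, and pseudo-label rectification. It assumes a Gaussian-kernel maximum mean discrepancy (MMD) for the class-conditional alignment term and a nearest-centroid rule for correcting pseudo-labels; the function names, feature shapes, kernel choice, and rectification rule are assumptions for illustration and are not taken from the paper.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches (Gaussian kernel)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def class_level_alignment_loss(src_feat, src_lab, tgt_feat, tgt_pseudo, n_classes):
    """Average per-class MMD between one source subject and the target subject."""
    losses = []
    for c in range(n_classes):
        s, t = src_feat[src_lab == c], tgt_feat[tgt_pseudo == c]
        if len(s) > 1 and len(t) > 1:  # skip classes with too few samples to estimate MMD
            losses.append(gaussian_mmd(s, t))
    return torch.stack(losses).mean() if losses else torch.zeros(())

def rectify_pseudo_labels(tgt_feat, tgt_pseudo, n_classes):
    """Reassign each target sample to its nearest class centroid in feature space."""
    centroids = []
    for c in range(n_classes):
        members = tgt_feat[tgt_pseudo == c]
        # fall back to the global mean when no sample currently carries label c
        centroids.append(members.mean(0) if len(members) > 0 else tgt_feat.mean(0))
    return torch.cdist(tgt_feat, torch.stack(centroids)).argmin(dim=1)

# Toy usage with random 64-D features and 3 emotion classes (shapes are hypothetical).
src_feat, tgt_feat = torch.randn(32, 64), torch.randn(32, 64)
src_lab = torch.randint(0, 3, (32,))
tgt_pseudo = rectify_pseudo_labels(tgt_feat, torch.randint(0, 3, (32,)), 3)
loss = class_level_alignment_loss(src_feat, src_lab, tgt_feat, tgt_pseudo, 3)
```

In a leave-one-subject-out setup, a loss of this kind would typically be computed per source subject and combined with a classification objective; the actual losses, architecture, and rectification procedure used by MFA-LR are detailed in the paper itself.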

