Abstract

Electroencephalogram (EEG)-based emotion recognition has made great progress in recent years. Current pipelines collect EEG training data in a lengthy calibration session for each new subject, which is time-consuming and user-unfriendly. To reduce the time required for calibration, many studies have applied domain adaptation (DA) approaches that transfer knowledge from existing subjects (the source domain) to the new subject (the target domain), reducing the dependence on the calibration session. Existing DA methods usually require substantial unlabeled EEG data from the new subject. However, in realistic scenarios only a small number of labeled samples are available from the target subject's calibration session. Motivated by this, we introduce a novel domain adaptation architecture based on adversarial training to learn domain-invariant feature representations across subjects. To improve performance when only a few labeled EEG samples are available in the calibration session, we add a soft label loss to the architecture, which ensures that the inter-class relationships learned from the source domain are transferred to the target domain. We evaluate the method on the SEED dataset, and the experimental results show that our method achieves an average accuracy of 87.28% using only 15 examples per trial from the calibration session, indicating the effectiveness of our framework.
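To make the described architecture concrete, the following is a minimal sketch (an assumption, not the authors' released code) of adversarial domain adaptation with a gradient reversal layer plus a soft-label term, written in PyTorch. All module names, the 310-dimensional input (62 channels x 5 differential-entropy bands, typical for SEED), and the loss weights are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

class FeatureExtractor(nn.Module):
    def __init__(self, in_dim=310, hid=128):  # 310-dim SEED features (assumed)
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class EmotionClassifier(nn.Module):
    def __init__(self, hid=128, n_classes=3):  # 3 emotion classes in SEED
        super().__init__()
        self.fc = nn.Linear(hid, n_classes)
    def forward(self, f):
        return self.fc(f)

class DomainDiscriminator(nn.Module):
    def __init__(self, hid=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, 2))  # source vs. target
    def forward(self, f, lamb):
        return self.net(grad_reverse(f, lamb))

def soft_label_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened distributions, transferring
    inter-class relationships from a source-trained teacher to the target classifier."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def training_step(Fx, Cy, Dd, teacher, x_src, y_src, x_tgt, y_tgt, lamb=0.5, alpha=0.3):
    f_src, f_tgt = Fx(x_src), Fx(x_tgt)
    # Supervised loss on labeled source data and the few labeled target samples
    cls_loss = F.cross_entropy(Cy(f_src), y_src) + F.cross_entropy(Cy(f_tgt), y_tgt)
    # Adversarial domain loss via the gradient reversal layer
    dom_logits = torch.cat([Dd(f_src, lamb), Dd(f_tgt, lamb)])
    dom_labels = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                            torch.ones(len(x_tgt), dtype=torch.long)])
    dom_loss = F.cross_entropy(dom_logits, dom_labels)
    # Soft-label loss against a teacher pretrained on the source domain (assumed design)
    soft_loss = soft_label_loss(Cy(f_tgt), teacher(x_tgt).detach())
    return cls_loss + dom_loss + alpha * soft_loss
```

The gradient reversal trick lets a single backward pass train the feature extractor to fool the domain discriminator, encouraging domain-invariant representations, while the soft-label term keeps the target classifier consistent with inter-class structure learned on the source subjects.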
