Abstract

As an important element of human-machine interaction, electroencephalogram (EEG)-based emotion recognition has achieved significant progress. However, one obstacle to practicality lies in the variability across subjects and sessions. Although several studies have adopted domain adaptation (DA) approaches to tackle this problem, most of them pool data from different subjects and sessions together as a single source for transfer. Because EEG data from different subjects and sessions have different marginal distributions, such approaches violate the DA assumption that the source domain follows a single marginal distribution. We therefore propose the multi-source EEG-based emotion recognition network (MEERNet), which takes both domain-invariant and domain-specific features into consideration. We first assume that EEG data from different domains share the same low-level features; we then construct multiple branches, one per source, to extract domain-specific features, and DA is conducted between the target and each individual source. Finally, the prediction is obtained by combining the outputs of the multiple branches. We evaluate our method on SEED and SEED-IV for recognizing three and four emotions, respectively. Experimental results show that MEERNet outperforms single-source methods in cross-session and cross-subject transfer scenarios, with average accuracies of 86.7% and 67.1%, respectively.

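The abstract outlines a shared low-level feature extractor followed by one domain-specific branch per source, with adaptation between the target and each source and a combined prediction at inference time. The following minimal PyTorch sketch illustrates that structure under stated assumptions; the layer sizes, the mean-fusion rule, and the linear-kernel MMD alignment loss are illustrative choices, not the authors' exact implementation.

```python
# Illustrative sketch of the multi-source structure described in the abstract.
# All names, dimensions, and the MMD loss are assumptions for demonstration only.
import torch
import torch.nn as nn


class MEERNetSketch(nn.Module):
    def __init__(self, n_sources, in_dim=310, feat_dim=64, n_classes=3):
        super().__init__()
        # Shared extractor: all domains are assumed to share low-level features.
        self.shared = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        # One domain-specific branch (feature layer + classifier) per source.
        self.branch_feats = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
            for _ in range(n_sources)
        )
        self.branch_clfs = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in range(n_sources)
        )

    def forward(self, x):
        h = self.shared(x)
        # Each branch produces its own class probabilities for the input.
        probs = [
            clf(feat(h)).softmax(dim=-1)
            for feat, clf in zip(self.branch_feats, self.branch_clfs)
        ]
        # Combine branch outputs by averaging (one simple fusion choice).
        return torch.stack(probs, dim=0).mean(dim=0)


def mmd_linear(f_src, f_tgt):
    """Linear-kernel MMD between source and target branch features.

    A common choice of DA alignment loss; the paper's exact objective may differ.
    """
    delta = f_src.mean(dim=0) - f_tgt.mean(dim=0)
    return (delta * delta).sum()
```

During training, one would pass each source batch and the target batch through the shared extractor and the corresponding branch, then minimize the classification loss on the source plus an alignment term such as `mmd_linear` between that source's and the target's branch features; the dimension 310 is used here only as an example feature size.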