Abstract

As an essential element for the diagnosis and rehabilitation of psychiatric disorders, electroencephalogram (EEG) based emotion recognition has achieved significant progress due to its high precision and reliability. However, one obstacle to practicality lies in the variability between subjects and sessions. Although several studies have adopted domain adaptation (DA) approaches to tackle this problem, most of them treat EEG data from different subjects and sessions together as a single source domain for transfer, which either fails to satisfy the assumption of domain adaptation that the source has a certain marginal distribution, or increases the difficulty of adaptation. We therefore propose multi-source marginal distribution adaptation (MS-MDA) for EEG emotion recognition, which takes both domain-invariant and domain-specific features into consideration. First, we assume that different EEG data share the same low-level features; then we construct independent branches for multiple EEG source domains to perform one-to-one domain adaptation and extract domain-specific features. Finally, inference is made jointly by the multiple branches. We evaluate our method on SEED and SEED-IV for recognizing three and four emotions, respectively. Experimental results show that MS-MDA outperforms the comparison methods and state-of-the-art models in cross-session and cross-subject transfer scenarios in our settings. Code is available at https://github.com/VoiceBeer/MS-MDA.
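The branch-per-source idea described above can be illustrated with a minimal NumPy sketch: a shared low-level extractor feeds one branch per source domain, each branch aligns its source with the target via a marginal-distribution discrepancy (a linear-kernel MMD is used here as one common choice; the actual loss, network sizes, and the 310-dimensional feature assumption below are illustrative, not taken from the paper), and target predictions are averaged over branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd_linear(xs, xt):
    """Linear-kernel MMD between source and target feature batches:
    squared distance between the two batch means."""
    delta = xs.mean(axis=0) - xt.mean(axis=0)
    return float(delta @ delta)

# Hypothetical dimensions: 310 = 62 channels x 5 frequency bands of
# differential-entropy features, 3 emotion classes, 2 source domains.
n_feat, n_common, n_cls, n_sources = 310, 64, 3, 2

# Shared (domain-invariant) extractor and per-source branches.
W_common = rng.standard_normal((n_feat, n_common)) * 0.01
branches = [
    {"W_spec": rng.standard_normal((n_common, n_common)) * 0.01,
     "W_cls": rng.standard_normal((n_common, n_cls)) * 0.01}
    for _ in range(n_sources)
]

def forward(x, branch):
    h = np.tanh(x @ W_common)           # shared low-level features
    z = np.tanh(h @ branch["W_spec"])   # domain-specific features
    return z, z @ branch["W_cls"]       # features, class logits

# One target batch and one batch per source domain.
xt = rng.standard_normal((8, n_feat))
sources = [rng.standard_normal((8, n_feat)) for _ in range(n_sources)]

logits_sum, mmd_losses = 0.0, []
for xs, br in zip(sources, branches):
    zs, _ = forward(xs, br)             # source through its own branch
    zt, logits_t = forward(xt, br)      # target through the same branch
    mmd_losses.append(mmd_linear(zs, zt))  # one-to-one marginal alignment
    logits_sum = logits_sum + logits_t

pred = (logits_sum / n_sources).argmax(axis=1)  # average branch outputs
```

In training, each branch would minimize its classification loss on its own source plus the MMD term against the target, so every source domain is adapted individually rather than being pooled into one mixed source.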

Highlights

  • Emotion, as physiological information, unlike widely studied logical intelligence, is central to the quality and range of daily human communications (Dolan, 2002; Tyng et al., 2017)

  • It should be noted that since many previous works do not make their code publicly available, we re-implement the comparison methods as described in their papers under our settings, and include several typical deep-learning domain adaptation models for better comparison

  • We propose multi-source marginal distribution adaptation (MS-MDA), an EEG-based emotion recognition domain adaptation method, which is applicable to multiple source domain situations

Introduction

Emotion, unlike widely studied logical intelligence, is central to the quality and range of daily human communications (Dolan, 2002; Tyng et al., 2017). Affective brain-computer interfaces (aBCIs), acting as a bridge between emotions extracted from the brain and the computer, have shown potential for rehabilitation and communication (Birbaumer, 2006; Frisoli et al., 2012; Lee et al., 2019). Bucks and Radford (2004) investigated the identification of non-verbal communicative signals of emotion in people suffering from Alzheimer's disease. Hosseinifard et al. (2013) investigated non-linear features of EEG signals for classifying depression patients and normal subjects.
