Abstract

Recognizing cross-subject emotions from brain imaging data such as EEG has long been difficult because features generalize poorly across subjects. Systematically exploring the ability of different EEG features to identify emotional information across subjects is therefore crucial. Prior work has explored this question with only one or two kinds of features, and has reached differing findings and conclusions. In this work, we aim at a more comprehensive investigation of this question over a wider range of feature types, covering 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) classifier and the “leave-one-subject-out” verification strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies of 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset were reached. Furthermore, using manually operated feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth parameter of mobility in the beta rhythm achieved the best mean recognition accuracy of all the features. Through a pilot correlation analysis, we further examined the highly correlated features to better understand what information they carry that allows emotions to be differentiated across subjects. Several noteworthy observations were made. The results of this paper validate the feasibility of identifying robust EEG features for cross-subject emotion recognition.
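As a concrete illustration of the evaluation protocol described above, the following sketch (not the authors’ code; the beta band edges, data shapes, and SVM hyperparameters are illustrative assumptions) extracts Hjorth mobility from beta-band-filtered EEG and scores it with an SVM under leave-one-subject-out cross-validation, so that no subject contributes to both the training and the test fold:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def hjorth_mobility(x, axis=-1):
    """Hjorth mobility: sqrt(var(dx/dt) / var(x)), computed per channel."""
    dx = np.diff(x, axis=axis)
    return np.sqrt(np.var(dx, axis=axis) / np.var(x, axis=axis))

def beta_band(x, fs, low=13.0, high=30.0, order=4):
    """Band-pass filter to the beta rhythm (13-30 Hz assumed here)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Synthetic stand-in for real EEG: 15 subjects x 20 trials,
# 32 channels, 2 s at 128 Hz.
rng = np.random.default_rng(0)
fs, n_subjects, n_trials, n_channels, n_samples = 128, 15, 20, 32, 256
eeg = rng.standard_normal((n_subjects * n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_subjects * n_trials)    # emotion labels
groups = np.repeat(np.arange(n_subjects), n_trials)   # subject IDs

# One mobility value per channel -> one feature vector per trial.
X = hjorth_mobility(beta_band(eeg, fs))

# Leave-one-subject-out: each fold trains on all subjects but one
# and tests on the held-out subject.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean LOSO accuracy: {scores.mean():.3f}")
```

Grouping the folds by subject ID via LeaveOneGroupOut is what makes the evaluation cross-subject; an ordinary k-fold split would leak subject-specific signal into the test folds and inflate the reported accuracy.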

Highlights

  • Emotion recognition, an emerging research direction, has attracted increasing attention from different fields and is promising for many application domains

  • We first try to determine an upper bound on the performance of the proposed features

  • This upper bound can be verified using previously published automatic feature selection methods

Summary

INTRODUCTION

Emotion recognition, an emerging research direction, has attracted increasing attention from different fields and is promising for many application domains. Kim (2007) studied a bimodal data fusion method and utilized LDA to classify emotions. Using this method, the best recognition accuracy obtained on all three subjects’ data was 55%, far lower than the best result of 92% obtained on a single subject’s data. Some related work has attempted to tackle this problem and to identify robust EEG features for cross-subject emotion recognition. Soleymani et al. (2012) performed cross-subject emotion recognition tasks on EEG and eye gaze data. The authors found that EEGs in lateral temporal areas were activated more for positive than for negative emotions in the beta and gamma bands, and that subject-independent EEG features stemmed mostly from those brain areas and frequency bands. In the aforementioned work, only a few kinds of features were examined, and why those robust features contribute to cross-subject emotion recognition was not studied. The support vector machine (SVM), a state-of-the-art classifier, was used in all the experiments.
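To make the band-specific finding above concrete, here is a minimal sketch (the band edges, Welch settings, and data shapes are assumptions, not values from the cited study) of how beta- and gamma-band power, the features Soleymani et al. (2012) found most subject-independent, could be computed per channel:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"beta": (13.0, 30.0), "gamma": (30.0, 45.0)}  # assumed edges (Hz)

def band_power(x, fs, band):
    """Average PSD within a frequency band, per channel (x: channels x samples)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(256, x.shape[-1]), axis=-1)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

# Example: one trial of 32-channel EEG, 2 s at 128 Hz (synthetic stand-in).
rng = np.random.default_rng(1)
trial = rng.standard_normal((32, 256))
features = np.concatenate([band_power(trial, fs=128, band=b) for b in BANDS.values()])
print(features.shape)  # (64,) -> beta and gamma power for every channel
```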

MATERIALS AND METHODS
Experimental Data
Data Preprocessing
Feature Extraction
Automatic Feature Selection
Manually Operated Feature Selection
Overall Evaluation
Evaluation From Different Perspectives
Correlation Analysis
DISCUSSION