Abstract

Accurate recognition and understanding of human emotions is an essential skill that can improve collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that enable accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we utilize the 2D arousal-valence plane to develop four emotion labeling schemes for the EEG signals, such that each labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine (SVM) classifiers that classify each subject's EEG signals into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset.
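The pipeline described above (QTFD construction followed by feature extraction over the time-frequency surface) can be sketched in Python. The abstract does not specify which quadratic distribution or which of the 13 features the paper uses, so this sketch substitutes the discrete Wigner-Ville distribution (one member of the QTFD class) and a few illustrative time-frequency statistics; all concrete choices here are assumptions, not the paper's method.

```python
# Hedged sketch of a QTFD-based feature pipeline. The Wigner-Ville
# distribution and the statistics below are illustrative stand-ins for the
# paper's unspecified QTFD kernel and 13-feature set.
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a 1D signal.

    Returns an (n_freq, n_time) array; rows index frequency bins, columns time.
    """
    z = hilbert(np.asarray(x, dtype=float))   # analytic signal reduces cross-terms
    n = len(z)
    wvd = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)           # lags that stay inside the signal
        tau = np.arange(-tau_max, tau_max + 1)
        acf = np.zeros(n, dtype=complex)      # instantaneous autocorrelation in lag
        acf[tau % n] = z[t + tau] * np.conj(z[t - tau])
        wvd[:, t] = np.fft.fft(acf).real      # FFT over lag -> frequency axis
    return wvd

def tf_features(tfd):
    """A few illustrative features computed over the whole TFD surface."""
    p = np.abs(tfd)
    p = p / p.sum()                           # normalize to a pseudo-distribution
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    flux = np.abs(np.diff(np.abs(tfd), axis=1)).mean()  # spectral change over time
    return np.array([p.mean(), p.std(), entropy, flux])

# Synthetic chirp standing in for a single-channel EEG segment.
fs = 128.0
t = np.arange(256) / fs
segment = np.cos(2 * np.pi * (4.0 + 8.0 * t) * t)
feats = tf_features(wigner_ville(segment))
print(feats.shape)  # (4,)
```

In a full pipeline, feature vectors like `feats` (one per channel and trial) would then train a subject-specific SVM, as the abstract describes.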
Moreover, we design three performance evaluation analyses, namely the channel-based analysis, the feature-based analysis, and the neutral-class exclusion analysis. These analyses quantify the effects of utilizing different groups of EEG channels that cover various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class, on the capability of the proposed approach to discriminate between different emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the emotion classes defined by each of the four labeling schemes are within the range of –. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.

Highlights

  • Emotions comprise complex mental activities that can influence the physical and psychological behavior of humans during social interactions and decision-making processes

  • The three performance evaluation analyses are the channel-based analysis, the feature-based analysis, and the neutral-class exclusion analysis. They are designed to quantify the effects of utilizing different groups of EEG channels that cover various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class (the no-emotion state), on the capability of the proposed approach to discriminate between different emotion classes

  • Using the 1D-2CLS labeling scheme and Configuration 1 (C1), the highest average accuracy and F1 values achieved in discriminating between the high-arousal (HA) and low-arousal (LA) classes were 75.9% and 66.7%, respectively, obtained using the time-frequency features extracted from the symmetrical pair of EEG channels O1-O2. The highest average accuracy and F1 values achieved in discriminating between the high-valence (HV) and low-valence (LV) classes were 73.9% and 69.7%, respectively, obtained using the time-frequency features extracted from the symmetrical pair of EEG channels AF3-AF4
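The arousal-valence labeling used in the highlights above can be sketched as follows. DEAP self-assessment ratings range over 1-9; the midpoint threshold of 5 used here is an assumption, since this excerpt does not state the paper's exact cut-offs or the full definitions of its four labeling schemes.

```python
# Hedged sketch: map DEAP-style arousal/valence ratings (1-9) to emotion
# class labels. The threshold of 5.0 is an assumption.

def label_1d(rating, axis="arousal", thr=5.0):
    """One-dimensional two-class scheme (e.g. the 1D-2CLS in the highlights)."""
    high, low = ("HA", "LA") if axis == "arousal" else ("HV", "LV")
    return high if rating >= thr else low

def label_2d(arousal, valence, thr=5.0):
    """Two-dimensional four-class scheme: a quadrant of the arousal-valence plane."""
    return label_1d(arousal, "arousal", thr) + label_1d(valence, "valence", thr)

print(label_1d(7.2, "arousal"))  # HA
print(label_2d(7.2, 3.1))        # HALV
```

Each labeling scheme then defines the target classes for the subject-specific SVM classifiers described in the abstract.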



Introduction

Emotions comprise complex mental activities that can influence the physical and psychological behavior of humans during social interactions and decision-making processes. Identifying human emotional states is considered a vital capability towards achieving intelligent and effective social communication in several domains. In the medical domain, discerning the patient’s emotional state can provide caregivers with an indicator of the patient’s mental and physical status [1,2] and the progress of the recovery process [3]. In the education domain, characterizing the students’ emotional state in terms of their level of interest and active participation in the learning process is crucial to achieving effective knowledge transfer and development systems [4]. The literature reveals that researchers have developed various approaches for emotion recognition based on analyzing facial expressions [6,7,8,9], voice [10,11], and combined visual and textual modalities [12].

