Abstract

Due to the rapid development of human–computer interaction, affective computing has attracted increasing attention in recent years. In emotion recognition, electroencephalogram (EEG) signals are easier to record than other physiological signals and are not easily camouflaged. Because of the high-dimensional nature of EEG data and the diversity of human emotions, it is difficult to extract effective EEG features and recognize emotion patterns. This paper proposes a multi-feature deep forest (MFDF) model to identify human emotions. The EEG signals are first divided into several frequency bands, and then the power spectral density (PSD) and differential entropy (DE) are extracted from each frequency band and from the original signal as features. A five-class emotion model is used to label five emotions: neutral, angry, sad, happy, and pleasant. With either the original features or dimension-reduced features as input, a deep forest is constructed to classify the five emotions. The experiments are conducted on a public dataset for emotion analysis using physiological signals (DEAP). The experimental results are compared with traditional classifiers, including K-nearest neighbors (KNN), random forest (RF), and support vector machine (SVM). The MFDF achieves an average recognition accuracy of 71.05%, which is 3.40%, 8.54%, and 19.53% higher than RF, KNN, and SVM, respectively. In addition, the accuracies with dimension-reduced features and with the raw EEG signal as input are only 51.30% and 26.71%, respectively. These results show that the proposed method can effectively contribute to EEG-based emotion classification tasks.
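The feature-extraction step described above (band decomposition, then PSD and DE per band) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the band boundaries, the periodogram-based PSD estimate, and the Gaussian assumption behind the DE formula are all assumptions, and the function name `band_features` is hypothetical.

```python
import numpy as np

FS = 128  # DEAP EEG is downsampled to 128 Hz

# Classic EEG bands (Hz); the exact bands used in the paper may differ.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(signal, fs=FS):
    """Return interleaved PSD and DE features per band for a 1-D EEG channel."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Simple periodogram as the PSD estimate (Welch averaging is common too).
    psd_all = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    df = freqs[1] - freqs[0]
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        band_power = psd_all[mask].mean()          # PSD feature
        # DE of a Gaussian-distributed band signal: 0.5 * ln(2*pi*e*sigma^2),
        # with the band variance approximated by integrating the PSD.
        var = psd_all[mask].sum() * df
        feats.extend([band_power, 0.5 * np.log(2 * np.pi * np.e * var)])
    return np.array(feats)

# Synthetic demo: 4 s of a 10 Hz ("alpha") tone plus weak noise.
rng = np.random.default_rng(0)
t = np.arange(FS * 4) / FS
x = np.sin(2 * np.pi * 10 * t) + 0.01 * rng.standard_normal(t.size)
print(band_features(x).shape)  # → (8,)
```

With 4 bands and 2 features each, the vector has 8 entries per channel; stacking all 32 DEAP channels would give the high-dimensional feature matrix the abstract refers to.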

Highlights

  • Emotions occupy a very important position in human communication and personal decision-making

  • Experimental results on the DEAP dataset demonstrate that the proposed method outperforms traditional classifiers, including KNN, RF, and SVM

  • Power spectral density (PSD) and differential entropy (DE) are two classic feature sets for EEG-based emotion recognition (Naderi and Mahdavi-Nasab, 2010; Duan et al., 2013; Shi et al., 2013) and are selected as the input features for the proposed multi-feature deep forest (MFDF) method
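The deep forest classifier named in the highlights follows the cascade-forest idea of gcForest: each level holds several forests, and their class-probability vectors are concatenated with the input features for the next level. The sketch below is an assumption-laden illustration, not the authors' MFDF implementation; the class name `CascadeForest`, the forest counts, and all hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

class CascadeForest:
    """Minimal cascade of forests in the spirit of deep forest (gcForest)."""

    def __init__(self, n_levels=3):
        self.n_levels = n_levels
        self.levels = []

    def fit(self, X, y):
        aug = X
        for _ in range(self.n_levels):
            forests = [RandomForestClassifier(n_estimators=50, random_state=0),
                       ExtraTreesClassifier(n_estimators=50, random_state=0)]
            probas = []
            for f in forests:
                # Out-of-fold probabilities keep the next level from
                # overfitting to this level's training predictions.
                probas.append(cross_val_predict(f, aug, y, cv=3,
                                                method="predict_proba"))
                f.fit(aug, y)
            self.levels.append(forests)
            # Each level sees the raw features plus this level's outputs.
            aug = np.hstack([X] + probas)
        return self

    def predict(self, X):
        aug = X
        for forests in self.levels:
            probas = [f.predict_proba(aug) for f in forests]
            aug = np.hstack([X] + probas)
        # Average the last level's forests and take the most likely class.
        return np.argmax(sum(probas), axis=1)
```

Unlike deep neural networks, such a cascade has few hyperparameters and works with modest amounts of training data, which is one reason deep forests are attractive for EEG datasets of DEAP's size.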


Introduction

Emotions occupy a very important position in human communication and personal decision-making. In human–computer interaction (HCI), emotion recognition has been carried out using voice and facial-expression signals (Fan et al., 2003; Sidney et al., 2005; Zeng et al., 2008). However, these external signals can be camouflaged to some degree, so using voice and facial expressions as the basis for emotion recognition is not fully convincing. EEG signals, by contrast, are produced directly by the central nervous system of the human body, which is closely related to human emotions. The experiments use the DEAP dataset, which contains 32-channel EEG signals and eight-channel peripheral physiological signals recorded from 32 subjects watching 40 music videos.
