Abstract

In recent years, automatic emotion recognition has made human–computer interaction systems more intelligent and user-friendly. Emotion recognition based on electroencephalogram (EEG) signals has received widespread attention and produced many research results, but two important challenges remain: how to build an integrated temporal and spatial feature fusion and classification method with improved convolutional neural networks (CNNs), and how to exploit the spatial information of different electrode channels to improve recognition accuracy in deep learning. This paper proposes an emotion recognition method based on three-dimensional (3D) feature maps and CNNs. First, the EEG data are calibrated with 3 s of baseline data and divided into segments with a 6 s time window; the wavelet energy ratio and wavelet entropy of the five rhythms, together with approximate entropy, are then extracted from each segment. Second, the extracted features are arranged according to the EEG channel mapping positions, converting each segment into a 3D feature map that simulates the relative positions of the electrode channels on the scalp and provides spatial information for emotion recognition. Finally, a CNN framework is designed to learn local connections among electrode channels from the 3D feature maps and thereby improve the accuracy of emotion recognition. Experiments were conducted on the DEAP (Database for Emotion Analysis using Physiological Signals) data set; average classification accuracies of 93.61% and 94.04% were attained for valence and arousal in subject-dependent experiments, and 83.83% and 84.53% in subject-independent experiments. The experimental results demonstrate that the proposed method achieves better classification accuracy than state-of-the-art methods.
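The step of arranging per-channel features by electrode position can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size, the electrode coordinates, and the feature count (five wavelet energy ratios, five wavelet entropies, one approximate entropy) are assumptions chosen for the example, and only a subset of 10–20-system electrodes is mapped.

```python
import numpy as np

# Hypothetical (row, col) scalp-grid positions for some 10-20-system
# electrodes on a 9x9 grid; a real mapping would cover every channel.
CHANNEL_POS = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F3": (2, 2), "Fz": (2, 4), "F4": (2, 6),
    "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6),
    "P3": (6, 2), "Pz": (6, 4), "P4": (6, 6),
    "O1": (8, 3), "O2": (8, 5),
}

def to_3d_feature_map(features, grid=(9, 9)):
    """Arrange per-channel feature vectors into a 3D feature map.

    features: dict mapping channel name -> 1D array of F features.
    Returns an array of shape (F, rows, cols); grid cells with no
    electrode stay zero, preserving relative scalp positions.
    """
    n_feats = len(next(iter(features.values())))
    fmap = np.zeros((n_feats,) + grid)
    for ch, vec in features.items():
        r, c = CHANNEL_POS[ch]
        fmap[:, r, c] = vec
    return fmap

# Example: 11 placeholder features per channel for one 6 s segment.
rng = np.random.default_rng(0)
feats = {ch: rng.random(11) for ch in CHANNEL_POS}
fmap = to_3d_feature_map(feats)
print(fmap.shape)  # (11, 9, 9)
```

A stack of such maps (one per segment) can then be fed to a 2D CNN, whose local receptive fields see neighboring electrodes together, which is the spatial-information idea the abstract describes.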
