Abstract

Emotion recognition based on electroencephalography (EEG) is of great importance in human–computer interaction. In recent years, deep learning methods, especially convolutional neural networks (CNNs), have shown great potential for emotion recognition. However, full-channel EEG signals can introduce redundant data and hardware complexity, and CNNs are inherently local: they ignore the relationships among different channels. For channel selection, time-domain and frequency-domain features are first extracted, and the ReliefF algorithm then selects an initial set of channels that contribute most to the emotion recognition task based on these features. Because ReliefF considers only each channel's individual contribution, redundancy among channels may remain; the max-relevance and min-redundancy (mRMR) algorithm is therefore applied to reduce this redundancy and obtain the final channels. We name this combined algorithm ReliefF-mRMR. Inspired by EEGNet and the capsule network (CapsNet), we combine their advantages and propose Caps-EEGNet, which fully utilizes frequency and spatial information and models the relationships among channels. The channels selected by ReliefF-mRMR are mostly located over the frontal area, consistent with findings in related studies. Caps-EEGNet achieves average accuracies of 96.67%, 96.75% and 96.64% on the valence, arousal and dominance dimensions of the DEAP dataset, and 91.12%, 92.6% and 93.74% on the corresponding dimensions of the DREAMER dataset, outperforming other state-of-the-art methods. Experiments with the 8 selected channels show only a slight difference in accuracy compared with all channels, while computation with 8 channels is faster.
In addition, the 8 channels selected by the ReliefF-mRMR algorithm perform better than 8 randomly selected channels. These findings are valuable for practical EEG-based emotion recognition systems.
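The two-stage channel-selection idea described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's implementation: the feature extraction step is omitted, the ReliefF variant uses a single nearest hit/miss, and absolute correlation stands in for the relevance/redundancy measure that mRMR typically computes via mutual information.

```python
import numpy as np

def relieff_scores(X, y):
    """Simplified Relief-style scoring: a channel scores high when samples
    lie close to their nearest same-class neighbor (hit) and far from their
    nearest different-class neighbor (miss) along that channel."""
    n, d = X.shape
    scores = np.zeros(d)
    for i in range(n):
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf  # exclude the sample itself
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        hit = same[np.argmin(dists[same])]
        miss = diff[np.argmin(dists[diff])]
        scores += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return scores / n

def mrmr_select(X, y, ranked, k):
    """Greedy mRMR over ReliefF-ranked candidates: at each step pick the
    channel maximizing (relevance to the label) minus (mean redundancy
    with already-selected channels), both proxied by |correlation|."""
    selected = [ranked[0]]
    while len(selected) < k:
        best, best_val = None, -np.inf
        for c in ranked:
            if c in selected:
                continue
            relevance = abs(np.corrcoef(X[:, c], y)[0, 1])
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, c], X[:, s])[0, 1]) for s in selected]
            )
            val = relevance - redundancy
            if val > best_val:
                best, best_val = c, val
        selected.append(best)
    return selected
```

In use, `relieff_scores` ranks all channels by individual contribution, and `mrmr_select` prunes that ranking down to the final k (e.g. 8) channels while penalizing channels that duplicate information already captured.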
