Abstract

Electroencephalography (EEG) based emotion recognition has become an active research topic in the fields of cognitive interaction and brain-computer interfaces (BCI). Building a deep learning model that can fully learn frequency, spatial, and temporal representations from complex emotional EEG data while offering good neurological interpretability remains challenging. In this paper, a novel multiple frequency bands parallel spatial–temporal 3D deep residual learning framework (MFBPST-3D-DRLF) is proposed for EEG-based emotion recognition. First, a new optimal frequency band selection method based on group sparse regression is designed for characteristic analysis in the frequency domain. Second, spatial–temporal 3D feature representations of multiple frequency bands are generated in the data preparation stage to fully express the discriminative local patterns among brain responses to different emotional states. Finally, a parallel 3D deep residual network architecture is constructed to simultaneously extract high-level abstract features and perform accurate classification. The emotion recognition performance of the proposed method is evaluated on two benchmark datasets, SEED and SEED-IV. The proposed MFBPST-3D-DRLF achieves classification accuracies of 96.67% and 88.21% on the two datasets, respectively, outperforming several state-of-the-art algorithms. In addition, investigations of the intermediate results and model parameters reveal that neural signatures associated with different emotional states are traceable and that the gamma band is the most suitable for EEG-based emotion recognition.
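To make the first step more concrete, the sketch below illustrates the general idea of group sparse (group-lasso) regression for frequency band selection; it is not the authors' implementation, and the feature layout, regularization strength, synthetic data, and the helper name group_lasso_band_selection are illustrative assumptions. Each band's regression weights form one group, and bands whose weight groups are driven to zero are treated as uninformative.

```python
# Minimal sketch (not the authors' implementation) of group sparse (group-lasso)
# regression for frequency band selection. Each band contributes one group of
# regression weights; bands whose groups shrink to exactly zero are dropped.
# Feature sizes, the regularization strength, and the synthetic data are assumptions.
import numpy as np

def group_lasso_band_selection(X, y, groups, lam=2.0, lr=0.1, n_iter=500):
    """Proximal gradient descent on  0.5/n * ||y - Xw||^2 + lam * sum_g ||w_g||_2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n              # gradient of the squared loss
        w = w - lr * grad
        for g in groups:                          # group soft-thresholding (prox step)
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm <= lr * lam else (1.0 - lr * lam / norm) * w[g]
    return w

# Example: 5 bands x 10 features per band; in this synthetic data only the
# features of band index 2 actually drive the target.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X[:, 20:30] @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
groups = [np.arange(b * 10, (b + 1) * 10) for b in range(5)]
w = group_lasso_band_selection(X, y, groups)
band_norms = [np.linalg.norm(w[g]) for g in groups]
print("per-band weight norms:", np.round(band_norms, 3))
print("selected bands:", [b for b, s in enumerate(band_norms) if s > 1e-6])
```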
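The final step, a parallel 3D deep residual network with one branch per frequency band, could look roughly like the following PyTorch sketch. It is a minimal illustration under assumed input dimensions (a 9x9 electrode grid over 8 time frames and 5 bands) and assumed layer counts, not the architecture reported in the paper; the class names ResBlock3D and ParallelBand3DResNet are hypothetical.

```python
# Minimal sketch: one 3D residual branch per frequency band, with the branch
# outputs concatenated before a shared classifier. Channel counts, depths, and
# input shapes are illustrative assumptions, not the published configuration.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Basic 3D residual block: two 3x3x3 convolutions with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                 # residual (shortcut) connection

class ParallelBand3DResNet(nn.Module):
    """One 3D residual branch per selected frequency band, fused for classification."""
    def __init__(self, num_bands=5, channels=16, num_classes=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=1),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
                ResBlock3D(channels),
                ResBlock3D(channels),
                nn.AdaptiveAvgPool3d(1),          # global average pooling per branch
                nn.Flatten(),
            )
            for _ in range(num_bands)
        ])
        self.classifier = nn.Linear(num_bands * channels, num_classes)

    def forward(self, x):
        # x: (batch, num_bands, time, height, width) -- one spatial-temporal
        # 3D volume per frequency band
        feats = [branch(x[:, b:b + 1]) for b, branch in enumerate(self.branches)]
        return self.classifier(torch.cat(feats, dim=1))

# Example: 5 frequency bands, 8 time frames, 9x9 electrode grid, 3 emotion classes
model = ParallelBand3DResNet(num_bands=5, num_classes=3)
logits = model(torch.randn(2, 5, 8, 9, 9))
print(logits.shape)  # torch.Size([2, 3])
```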
