Emotion constitutes a higher-order cognitive process, and physiological signals can offer a more objective and realistic reflection of human emotional states. Electroencephalogram (EEG) signals, controlled by the central nervous system, are highly sensitive to fluctuations in emotional state. Current research primarily focuses on recognizing emotions through feature extraction from multi-channel EEG signals. However, the acquisition of multi-channel EEG signals is hindered by hair occlusion over most of the scalp. The objective of this study is to construct a real-time physiological emotion recognition model based on 2-channel EEG signals from the forehead, a region free of hair occlusion, combined with 3-channel peripheral physiological signals (PPS), employing signal preprocessing, feature extraction, feature normalization, and the ensemble learning algorithm Light Gradient Boosting Machine (LightGBM). In this study, thirty participants sequentially viewed fourteen virtual reality (VR) scenes and rated valence and arousal on the Self-Assessment Manikin (SAM) scale after each scene. The 2-channel EEG signals and 3-channel PPS signals were recorded synchronously from the forehead by the VR head-mounted display (HMD) Bio Pad developed in our research center. On the collected samples, our proposed model achieved median accuracies of 89.68% for valence and 90.11% for arousal in binary classification, and 84.93% in four-class emotion recognition. Furthermore, verification of the proposed method on a public database (DEAP) achieved accuracies of 84.03%, 84.37%, and 72.23% for valence, arousal, and four-class classification, respectively. The proposed physiological emotion recognition model, utilizing a limited number of multimodal physiological signal channels from wearable devices, achieves higher accuracy than results reported in some of the literature, and its small channel count and low runtime suggest its potential for real-time implementation.
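To make the classification stage of the pipeline concrete, the sketch below shows a normalization-plus-LightGBM workflow of the kind the abstract describes. It is a minimal illustration only: the feature matrix is synthetic, and the feature dimensions, labels, and hyperparameters are assumptions, not the paper's actual configuration.

```python
# Minimal sketch: normalized multimodal features -> LightGBM binary
# classifier (e.g., for valence). All data and hyperparameters below
# are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)

# Placeholder data: rows = trials, columns = features extracted from
# 2 EEG channels + 3 peripheral physiological (PPS) channels.
X = rng.normal(size=(420, 48))    # e.g., 30 subjects x 14 VR scenes
y = rng.integers(0, 2, size=420)  # binary valence labels from SAM ratings

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Feature normalization, fitted on the training split only.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

clf = LGBMClassifier(objective="binary", n_estimators=200, learning_rate=0.05)
clf.fit(X_train, y_train)
print(f"valence accuracy: {accuracy_score(y_test, clf.predict(X_test)):.4f}")
```

The same pattern extends to arousal and to four-class recognition by swapping in the corresponding labels and a multiclass objective.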