Abstract

In affective computing, recognizing emotions from EEG signals is challenging. In this study, emotions are identified using a three-dimensional (valence, arousal, and dominance) model on real-time data. One-minute videos were played to the subjects as stimuli, and the corresponding EEG signals were recorded with an EEG recorder. Four types of features were extracted from the EEG signals, and the most informative features for emotion recognition were identified by comparison. PCA is employed to select the important features from the extracted dataset, and the selected features are fed to three deep learning classifiers: a gated recurrent unit (GRU), a convolutional neural network (CNN), and a deep emotion recognizer (DER). The proposed deep learning system achieves 81.23%, 80.41%, and 81.75% for arousal, valence, and dominance, respectively. Our findings show that time-domain statistical features of EEG signals can effectively distinguish between different emotional states. The proposed model reaches 81% accuracy with a loss of 1.2.
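
The following is a minimal sketch of the pipeline described above (time-domain statistical features, PCA-based feature selection, and a GRU classifier), assuming synthetic data; the feature layout, shapes, and hyperparameters are illustrative assumptions and not the authors' actual implementation.

    # Hypothetical sketch: statistical EEG features -> PCA -> GRU classifier.
    # Data shapes and hyperparameters are assumptions for illustration only.
    import numpy as np
    from sklearn.decomposition import PCA
    from tensorflow.keras import layers, models

    # Assumed synthetic data: 200 trials, 32 channels x 4 time-domain
    # statistics per channel (e.g. mean, variance, skewness, kurtosis).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 32 * 4))
    y = rng.integers(0, 2, size=200)      # e.g. high/low arousal labels

    # PCA retains the components explaining 95% of the variance.
    pca = PCA(n_components=0.95)
    X_reduced = pca.fit_transform(X)

    # Reshape to (samples, timesteps, features); a one-step sequence is the
    # simplest way to feed the selected components to a GRU.
    X_seq = X_reduced[:, np.newaxis, :]

    model = models.Sequential([
        layers.Input(shape=X_seq.shape[1:]),
        layers.GRU(64),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_seq, y, epochs=5, batch_size=16, verbose=0)

In practice, each of the three affective dimensions (arousal, valence, dominance) would be trained as its own classification target, as reported in the results above.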
