Abstract

Human emotion recognition is a key technique in human–computer interaction. Traditional emotion recognition algorithms rely on external behavior such as facial expressions, which may fail to capture real human emotion because facial expression signals can be camouflaged. The EEG signal, in contrast, is closely tied to human emotion and can reflect it directly. In this paper, we propose to learn multi-channel features from the EEG signal for human emotion recognition, where the EEG signal is elicited by sound-signal stimulation. Specifically, we fuse multi-channel EEG and textual features in the time domain to recognize different human emotions, fusing six time-domain statistical features into a single feature vector for emotion classification. Textual features are extracted alongside the EEG features, and EEG- and text-based feature extraction is carried out in both the time and frequency domains. Finally, we train an SVM for human emotion recognition. Experiments on the DEAP dataset show that, compared with frequency-domain feature-based emotion recognition algorithms, our proposed method improves the recognition accuracy.
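The abstract does not enumerate the six time-domain statistics, so the sketch below assumes the six classic statistical features widely used in EEG emotion work (mean, standard deviation, and the mean absolute first and second differences, raw and normalized), concatenated across channels and fed to an SVM. All function names and shapes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def time_domain_features(x):
    """Six time-domain statistics for one EEG channel.
    Assumed feature set; the abstract does not list the six features."""
    d1 = np.diff(x)            # first differences
    d2 = np.diff(x, n=2)       # second differences
    sigma = np.std(x)
    return np.array([
        np.mean(x),                   # mean
        sigma,                        # standard deviation
        np.mean(np.abs(d1)),          # mean abs. first difference
        np.mean(np.abs(d1)) / sigma,  # same, normalized by std
        np.mean(np.abs(d2)),          # mean abs. second difference
        np.mean(np.abs(d2)) / sigma,  # same, normalized by std
    ])

def fuse_channels(trial):
    """Fuse per-channel features into one vector:
    trial has shape (n_channels, n_samples) -> (6 * n_channels,)."""
    return np.concatenate([time_domain_features(ch) for ch in trial])

# Hypothetical usage on DEAP-style trials:
# X_trials: array of shape (n_trials, n_channels, n_samples); y: emotion labels
# X = np.stack([fuse_channels(t) for t in X_trials])
# clf = SVC(kernel="rbf").fit(X, y)
```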
