Abstract

Human emotion recognition is a key technique in human–computer interaction. Traditional emotion recognition algorithms rely on external cues such as facial expressions, which may fail to capture genuine human emotion because facial expressions can be camouflaged. EEG signals, in contrast, are closely related to human emotion and can reflect it directly. In this paper, we propose to learn multi-channel features from EEG signals for human emotion recognition, where the EEG signals are elicited by sound-signal stimulation. Specifically, we fuse multi-channel EEG and textual features in the time domain to recognize different human emotions: six time-domain statistical features are fused into a feature vector for emotion classification, and EEG- and text-based features are extracted from both the time and frequency domains. Finally, we train an SVM for human emotion recognition. Experiments on the DEAP dataset show that, compared with frequency-domain feature-based emotion recognition algorithms, our proposed method improves the recognition accuracy.
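
The abstract does not enumerate the six time-domain statistics, so the sketch below assumes a common choice from the EEG emotion-recognition literature (mean, standard deviation, and mean absolute first/second differences, raw and normalized). It is a minimal illustration of the described pipeline, not the authors' implementation: per-channel statistics are concatenated into one fused feature vector per trial and fed to an SVM. The helper names and the synthetic DEAP-shaped data (32 channels, 8064 samples per trial) are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def time_domain_features(x):
    """Six time-domain statistics of one EEG channel x (1-D array).
    Assumed feature set; the paper's abstract does not list them."""
    d1 = np.diff(x)          # first differences
    d2 = np.diff(x, n=2)     # second differences
    return np.array([
        x.mean(),
        x.std(),
        np.abs(d1).mean(),
        np.abs(d1).mean() / (x.std() + 1e-12),  # normalized 1st diff
        np.abs(d2).mean(),
        np.abs(d2).mean() / (x.std() + 1e-12),  # normalized 2nd diff
    ])

def fuse_channels(trial):
    """trial: (n_channels, n_samples) EEG array -> fused feature vector,
    i.e. the six statistics of every channel concatenated."""
    return np.concatenate([time_domain_features(ch) for ch in trial])

# Hypothetical usage on DEAP-like data: 40 trials of 32 channels x 8064 samples.
rng = np.random.default_rng(0)
X = np.stack([fuse_channels(rng.standard_normal((32, 8064))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # e.g. binary high/low valence labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```

In a real experiment the synthetic trials above would be replaced by preprocessed DEAP recordings and the textual features fused into the same vector before classification.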
