Abstract

Facial expressions can be used to identify human emotions, but they are easy to misjudge when deliberately masked. In addition, emotion recognition from a single modality often yields a low recognition rate because of limitations inherent to that modality. To address these problems, a spatio-temporal neural network is fused with a separable residual network to recognize emotion from EEG signals and facial images. The average recognition rates on the EEG and face data sets are 78.14% and 70.89%, respectively, and decision-level fusion on the DEAP data set reaches 84.53%. Experimental results show that, compared with either single modality, the proposed bimodal emotion recognition architecture performs better and effectively integrates the emotional information carried by facial visual signals and EEG signals.
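The abstract reports decision-level fusion of the two modality-specific classifiers. A common way to implement this is a weighted combination of each model's class-probability outputs; the sketch below illustrates that idea only, with an illustrative weight, and is not the paper's exact fusion rule.

```python
def decision_fusion(p_eeg, p_face, w_eeg=0.5):
    """Weighted decision-level fusion of two class-probability vectors.

    p_eeg, p_face: per-class probabilities from the EEG and face models.
    w_eeg: weight given to the EEG model (illustrative, not from the paper).
    Returns the fused class index and the fused probability vector.
    """
    fused = [w_eeg * a + (1.0 - w_eeg) * b for a, b in zip(p_eeg, p_face)]
    label = max(range(len(fused)), key=fused.__getitem__)
    return label, fused

# Example: the EEG model favors class 1, the face model favors class 0;
# with w_eeg=0.6 the fused vector is [0.42, 0.58], so class 1 is chosen.
label, fused = decision_fusion([0.3, 0.7], [0.6, 0.4], w_eeg=0.6)
```

In practice, the fusion weight is often tuned on validation data so that the more reliable modality (here, EEG) contributes more to the final decision.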

