Abstract

To reduce the high computational complexity of video emotion recognition, we propose a novel Temporal-Spatial Local Binary Pattern Moment (TSLBPM) method for feature extraction in dual-modality emotion recognition. First, preprocessing is applied to obtain the facial-expression and posture sequences. Second, TSLBPM is used to extract features from these sequences. The minimum Euclidean distances between the features of the test sequences and those of the labeled emotion training sets are then selected and used as independent evidence to build Basic Probability Assignments (BPAs). Finally, according to the combination rule of Dempster-Shafer evidence theory, the recognition result is obtained by fusing the BPAs. Experimental results on the FABO facial-expression and posture dual-modality emotion database show that the TSLBPM features of video images can be extracted quickly and that the emotional state in video can be identified effectively. Moreover, comparison with other methods verifies the superiority of the fusion approach.
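The fusion step described above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: the inverse-distance mapping from Euclidean distances to BPA masses is an assumption for demonstration (the paper's exact BPA construction may differ), the emotion classes and distance values are hypothetical, and the BPAs are simplified to singleton hypotheses only.

```python
def distances_to_bpa(distances):
    """Map per-class minimum Euclidean distances to a BPA:
    smaller distance -> larger mass (hypothetical inverse-distance scheme)."""
    inv = {c: 1.0 / (d + 1e-9) for c, d in distances.items()}
    total = sum(inv.values())
    return {c: v / total for c, v in inv.items()}

def dempster_combine(m1, m2):
    """Combine two BPAs over singleton hypotheses with Dempster's rule;
    conflicting mass is redistributed by normalization."""
    classes = set(m1) | set(m2)
    combined = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in classes}
    conflict = 1.0 - sum(combined.values())  # mass assigned to disagreeing pairs
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

# Hypothetical minimum distances for three emotion classes, one dict per modality
face_dist = {"happy": 0.8, "angry": 2.0, "sad": 3.5}
posture_dist = {"happy": 1.2, "angry": 1.5, "sad": 4.0}

m_face = distances_to_bpa(face_dist)
m_posture = distances_to_bpa(posture_dist)
fused = dempster_combine(m_face, m_posture)
decision = max(fused, key=fused.get)  # class with the largest fused mass
```

With these toy distances, the face modality alone favors "happy" and the posture modality weakly agrees, so the fused BPA also selects "happy"; the normalization by (1 - conflict) is what allows two partially disagreeing modalities to reinforce their common hypothesis.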
