Abstract

Emotion recognition has attracted considerable research interest, and numerous approaches have been proposed, most of which rely on visual, acoustic, or psychophysiological information individually. Although more recent work considers multimodal approaches, the individual modalities are often combined only by simple fusion or fused directly at the feature level within deep learning networks. In this paper, we propose an approach that trains several specialist networks and employs deep learning techniques to fuse the features of the individual modalities. The approach comprises a multimodal deep belief network (MDBN) that optimizes and fuses a unified representation from the features of multiple psychophysiological signals, a bimodal deep belief network (BDBN) that extracts representative visual features from the video stream, and a further BDBN that learns high-level multimodal features from the unified features of the two modalities. Experiments on the BioVid Emo DB database achieve 80.89% accuracy, outperforming state-of-the-art approaches. The results demonstrate that the proposed approach alleviates the feature redundancy and loss of key features that arise in multimodal fusion.
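For readers who want a concrete picture of the three-network fusion structure described above, the sketch below shows one possible arrangement in PyTorch. The layer sizes, feature dimensions, number of emotion classes, and the use of plain feed-forward encoders in place of layer-wise pretrained DBN/RBM stacks are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the three-branch fusion structure from the abstract.
# All dimensions and the MLP stand-ins for the pretrained DBNs are assumptions.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Stand-in for an (M/B)DBN branch that encodes one feature stream."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultimodalEmotionNet(nn.Module):
    def __init__(self, physio_dim=64, visual_dim=128, fused_dim=64, n_classes=5):
        super().__init__()
        # MDBN-like branch: unifies features from several psychophysiological signals.
        self.physio_branch = Branch(physio_dim, 128, fused_dim)
        # BDBN-like branch: encodes representative visual features from the video stream.
        self.visual_branch = Branch(visual_dim, 128, fused_dim)
        # Second BDBN-like network: learns high-level multimodal features
        # from the concatenated unimodal representations, then classifies.
        self.fusion = Branch(2 * fused_dim, 128, fused_dim)
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, physio, visual):
        h = torch.cat([self.physio_branch(physio), self.visual_branch(visual)], dim=1)
        return self.classifier(self.fusion(h))

# Example forward pass on a batch of 8 samples with the assumed feature sizes.
model = MultimodalEmotionNet()
logits = model(torch.randn(8, 64), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 5])
```

In the paper's setting, each branch would be pretrained as a stack of restricted Boltzmann machines before fine-tuning; the sketch only illustrates how the two unimodal encoders feed a shared fusion network.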
