Abstract

Emotion recognition is a significant research field in pattern recognition and artificial intelligence. The Multimodal Emotion Recognition Challenge (MEC) is part of the 2016 Chinese Conference on Pattern Recognition (CCPR). The goal of this competition is to compare multimedia processing and machine learning methods for multimodal emotion recognition. The challenge also aims to provide a common benchmark data set, to bring together the audio and video emotion recognition communities, and to promote research in multimodal emotion recognition. The data used in this challenge come from the Chinese Natural Audio-Visual Emotion Database (CHEAVD), which is selected from Chinese movies and TV programs. The discrete emotion labels were annotated by four experienced assistants. Three sub-challenges are defined: audio, video, and multimodal emotion recognition. This paper introduces the baseline audio and visual features, and the recognition results obtained with Random Forests.
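The baseline described in the abstract pairs pre-extracted audio and visual features with a Random Forest classifier. The sketch below illustrates such a setup under stated assumptions: the feature dimensions, the eight-class label set, and the placeholder data are hypothetical and are not the official MEC 2016 baseline code; early fusion by feature concatenation is shown only as one plausible way to handle the multimodal sub-challenge.

```python
# Minimal sketch of a Random Forest baseline for multimodal emotion
# recognition, assuming per-clip audio and video features have already
# been extracted. Feature sizes and labels below are placeholders, not
# the official MEC 2016 baseline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Placeholder data: 200 training / 50 test clips with 384-dim audio and
# 512-dim video descriptors, labelled with one of 8 discrete emotions.
X_audio_train = rng.normal(size=(200, 384))
X_video_train = rng.normal(size=(200, 512))
y_train = rng.integers(0, 8, size=200)

X_audio_test = rng.normal(size=(50, 384))
X_video_test = rng.normal(size=(50, 512))
y_test = rng.integers(0, 8, size=50)

# Multimodal sub-challenge: concatenate audio and video features
# (early fusion) and classify with a Random Forest.
X_train = np.hstack([X_audio_train, X_video_train])
X_test = np.hstack([X_audio_test, X_video_test])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("macro F1:", f1_score(y_test, y_pred, average="macro"))
```

For the audio-only or video-only sub-challenges, the same classifier would simply be trained on the corresponding single-modality feature matrix instead of the concatenated one.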
