Abstract
Video affective content analysis is an active research area in computer vision. Live streaming video has become one of the dominant modes of communication in the past decade, so affective analysis of video content plays a vital role. Existing works on video affective content analysis focus largely on predicting the user's current state using either visual or acoustic features alone. In this paper, we propose a novel hybrid SVM-RBM classifier that recognizes emotion in both live streaming video and stored video data using combined audio-visual features, thereby recognizing the user's mood in terms of categorical emotion descriptors. The proposed method is evaluated on human emotion recognition for live streaming data captured with devices such as the Microsoft Kinect and a web camera. We further test and validate it on standard datasets such as HUMANE and SAVEE. Emotion classification is performed on both acoustic and visual data using a Restricted Boltzmann Machine (RBM) and a Support Vector Machine (SVM). We observe that the hybrid SVM-RBM classifier outperforms the standalone RBM and SVM classifiers on the annotated datasets.
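One common way to realize such a hybrid is to use the RBM as an unsupervised feature learner whose hidden-unit activations feed an SVM classifier. The following is a minimal sketch of that idea using scikit-learn's `BernoulliRBM` and `SVC`; the feature dimensionality, number of emotion classes, and synthetic data are placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC

# Placeholder audio-visual feature vectors: 200 samples, 40 dimensions
# (in practice these would come from acoustic and facial feature extraction).
rng = np.random.default_rng(0)
X = rng.random((200, 40))
y = rng.integers(0, 6, size=200)  # six categorical emotion labels, e.g. Ekman's basic emotions

model = Pipeline([
    ("scale", MinMaxScaler()),  # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),  # learns a hidden representation
    ("svm", SVC(kernel="rbf")),  # classifies the RBM's hidden activations
])
model.fit(X, y)
pred = model.predict(X[:5])
```

The pipeline chains the RBM's learned representation directly into the SVM, so a single `fit`/`predict` call covers both stages.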