Abstract

Facial expressions have proven to be a reliable way to discern human emotions across varied circumstances. With the exponential rise in emotion-detection research, facial expression recognition (FER) has emerged as a research topic for identifying essential emotions. Happiness is one of the basic emotions everyone may experience, and facial expressions detect it more effectively than other emotion-measuring methods. Most existing techniques are designed to recognize many emotions at once in order to maximize overall precision, whereas maximizing recognition accuracy for a single emotion remains challenging for researchers. Some techniques can identify a single happy mood in unconstrained video, but they are limited by the extreme head-pose fluctuations they must process, and their accuracy still needs improvement. This research proposes a novel hybrid facial emotion recognition approach for unconstrained video to improve accuracy. A Deep Belief Network (DBN) combined with long short-term memory (LSTM) is employed to extract dynamic features from the video frames. In the experiments, decision-level and feature-level fusion techniques are applied to an unconstrained video dataset. The results show that the proposed hybrid approach can be more precise than some existing facial expression models.
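To illustrate the two fusion strategies the abstract mentions, the sketch below contrasts feature-level fusion (concatenating feature vectors before classification) with decision-level fusion (combining per-model class probabilities). This is a minimal NumPy illustration, not the authors' actual pipeline; the feature vectors and probability values are hypothetical stand-ins for DBN- and LSTM-derived outputs.

```python
import numpy as np

def feature_level_fusion(feat_a, feat_b):
    # Feature-level fusion: concatenate the two feature vectors
    # into one joint representation fed to a single classifier.
    return np.concatenate([feat_a, feat_b], axis=-1)

def decision_level_fusion(probs_a, probs_b, w=0.5):
    # Decision-level fusion: weighted average of the class-probability
    # outputs of two separately trained classifiers.
    return w * probs_a + (1 - w) * probs_b

# Hypothetical per-frame features (e.g., DBN spatial, LSTM temporal).
spatial = np.ones(4)
temporal = np.zeros(3)
fused = feature_level_fusion(spatial, temporal)
print(fused.shape)  # (7,)

# Hypothetical class probabilities for [happy, not-happy].
p1 = np.array([0.7, 0.3])
p2 = np.array([0.5, 0.5])
p = decision_level_fusion(p1, p2)
print(p)  # [0.6 0.4]
```

Feature-level fusion lets the classifier learn cross-modal interactions, while decision-level fusion keeps the models independent and is simpler to tune via the weight `w`.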
