Abstract

Facial expressions have proven to be a reliable way to discern human emotions across varied circumstances. With the recent surge in emotion-detection research, facial expression recognition (FER) has emerged as a prominent topic for identifying essential emotions. Happiness is one of these basic emotions that everyone may experience, and facial expressions detect it better than other emotion-measuring methods. Most existing techniques are designed to recognize a range of emotions and to maximize overall precision; maximizing recognition accuracy for a single emotion remains challenging for researchers. Some techniques identify a single happy emotion in unconstrained video, but they are limited by the extreme head-pose fluctuations they must handle, and their accuracy still needs improvement. This research proposes a novel hybrid facial emotion recognition approach for unconstrained video to improve accuracy. A Deep Belief Network (DBN) combined with long short-term memory (LSTM) is employed to extract dynamic information from the video frames. In the experiments, decision-level and feature-level fusion techniques are applied to an unconstrained video dataset. The results show that the proposed hybrid approach can be more accurate than some existing facial expression models.
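The abstract contrasts two fusion strategies without defining them. The sketch below illustrates the general distinction in plain Python: feature-level fusion combines representations before classification, while decision-level fusion combines the outputs of separate classifiers. All vectors and scores here are hypothetical placeholders, not the paper's actual DBN or LSTM outputs.

```python
# Minimal sketch of the two fusion strategies named in the abstract.
# All values are hypothetical; this is not the paper's pipeline.

def feature_level_fusion(features_a, features_b):
    """Concatenate per-branch feature vectors into one vector,
    which a single classifier would then consume."""
    return features_a + features_b

def decision_level_fusion(scores_a, scores_b):
    """Average per-class scores produced by two separate classifiers."""
    return [(a + b) / 2 for a, b in zip(scores_a, scores_b)]

# Hypothetical spatial (e.g. DBN) and temporal (e.g. LSTM) features
# for one video clip.
spatial_features = [0.2, 0.7, 0.1]
temporal_features = [0.5, 0.3]
print(feature_level_fusion(spatial_features, temporal_features))
# -> [0.2, 0.7, 0.1, 0.5, 0.3]

# Hypothetical class scores ("happy", "not happy") from two classifiers.
scores_a = [0.75, 0.25]
scores_b = [0.5, 0.5]
print(decision_level_fusion(scores_a, scores_b))
# -> [0.625, 0.375]
```

In practice, feature-level fusion lets one classifier learn cross-branch interactions, while decision-level fusion keeps the branches independent and is simpler to combine.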
