Abstract

In this study, we present a multi-level fusion deep learning technique for facial expression recognition, with applications spanning cognitive science, personality development, and the detection and diagnosis of mental health disorders. The proposed approach, named the Deep Learning aided Hybridized Face Expression Recognition System (DLFERS), classifies human behavior from a single image frame using feature extraction and a support vector machine. The methodology incorporates an information classification algorithm that generates a new fused image from two integrated blocks, the eyes and the mouth, which are highly sensitive to changes in human expression and therefore most relevant for interpreting emotion. The Transformation of Invariant Structural Features (TISF) and the Transformation of Invariant Powerful Movement (TIPM) are used to extract features for the method's Storage Pack of Features (SPOF). Multiple datasets are used to compare the effectiveness of different neural network algorithms for learning facial expressions. The main findings show that the proposed DLFERS approach achieves an overall classification accuracy of 93.96% and successfully captures a user's genuine emotions during common computer-based tasks.
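The eye-and-mouth fusion step described above can be illustrated with a minimal sketch. The abstract does not specify how the two blocks are combined, so the following assumes the simplest scheme: crop both regions from the frame and stack them vertically into a single fused image, which would then feed the feature extraction and SVM stages. The function name, box format, and padding strategy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_regions(frame, eye_box, mouth_box):
    """Crop the eye and mouth blocks and stack them into one fused image.

    Boxes are (row, col, height, width). Hypothetical helper; the paper's
    actual fusion scheme is not specified in the abstract.
    """
    def crop(box):
        r, c, h, w = box
        return frame[r:r + h, c:c + w]

    eyes = crop(eye_box)
    mouth = crop(mouth_box)
    # Pad the narrower block on the right so both share a width, then stack.
    width = max(eyes.shape[1], mouth.shape[1])
    def pad(block):
        return np.pad(block, ((0, 0), (0, width - block.shape[1])))
    return np.vstack([pad(eyes), pad(mouth)])

# Toy 64x64 grayscale frame; a real system would locate the boxes
# with a face/landmark detector before fusing.
frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
fused = fuse_regions(frame, eye_box=(10, 8, 12, 48), mouth_box=(40, 16, 10, 32))
print(fused.shape)  # (22, 48)
```

The fused image is deliberately small: discarding everything outside the two expression-sensitive blocks reduces the input dimensionality before features are extracted.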


