Abstract

Facial emotions are important cues that help us identify the intentions of others. People generally infer the emotional state of others, such as anger, sadness, and joy, from vocal tone and facial expressions. Here, a novel Facial Emotion Recognition (FER) system is developed that comprises four major processes: (a) face detection, (b) feature extraction, (c) optimal feature selection, and (d) classification. The input facial images are first passed to a face detection model based on the Viola-Jones method. Then, from the detected face regions, Local Binary Pattern (LBP), Discrete Wavelet Transform (DWT), and Gray Level Co-occurrence Matrix (GLCM) features are extracted. Because the resulting feature vector is large, the optimal features must be selected from it. The selected features are then passed to a Neural Network (NN) for classification. As a novelty, both the optimal feature selection and the weight optimization of the NN are carried out via a new hybrid algorithm called Mean Fitness Oriented JA+FF position update (MF-JFF). Finally, an algorithmic analysis is performed to validate the performance of the presented model. From this analysis, the accuracy attained at γ = 0.6 was 2.2% better than that attained at γ = 0.2, 0.4, 0.8, and 1.
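To make the feature-extraction stage concrete, the following is a minimal numpy-only sketch of the three descriptors named above (LBP, a one-level Haar DWT as a stand-in for the paper's DWT, and GLCM statistics), applied to a synthetic face crop. The Viola-Jones detection step, the MF-JFF feature selector, and the NN classifier are omitted; all function names and the 32x32 random image are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel gets an
    8-bit code from thresholding its neighbours against the centre value."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised 256-bin histogram

def haar_ll(img):
    """One level of a 2-D Haar DWT, keeping only the low-low (approximation)
    sub-band: average adjacent rows, then adjacent columns."""
    rows = (img[0::2, :].astype(float) + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def glcm_features(img, levels=8):
    """GLCM for the horizontal offset (0, 1), reduced to three common
    Haralick-style statistics: contrast, energy, homogeneity."""
    q = (img.astype(float) / 256 * levels).astype(int)  # quantise grey levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    ii, jj = np.indices(glcm.shape)
    contrast = np.sum(glcm * (ii - jj) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(ii - jj)))
    return np.array([contrast, energy, homogeneity])

# Synthetic stand-in for a face crop returned by the detector.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)

# Concatenated descriptor: 256 LBP bins + 16*16 LL coefficients + 3 GLCM stats.
features = np.concatenate(
    [lbp_histogram(face), haar_ll(face).ravel(), glcm_features(face)]
)
print(features.shape)  # (515,)
```

The length of this concatenated vector (515 here, far larger for realistic image sizes) illustrates why the abstract calls for a subsequent optimal-feature-selection step before classification.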
