Abstract

Background: Humans convey many emotions during a conversation, and facial expressions carry much of this emotional information.
Objectives: This study proposed a Machine Learning (ML) approach, based on statistical analysis, for recognizing emotions from facial expressions in digital images.
Methodology: A total of 600 digital images, divided into 6 classes (Anger, Happy, Fear, Surprise, Sad, and Normal), were collected from the publicly available Taiwan Facial Expression Images Database. In the first step, all images were converted to gray-level format and 4 Regions of Interest (ROIs) were created on each image, dividing the dataset into 2,400 (600 × 4) sub-images. In the second step, 3 types of statistical features, namely texture, histogram, and binary features, were extracted from each ROI. In the third step, the statistical features were optimized using the best-first search algorithm. Lastly, the optimized statistical feature dataset was deployed on various ML classifiers.
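The preprocessing pipeline above (gray-level conversion, ROI splitting, and per-ROI statistical features) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the 2×2 quadrant ROI layout and the specific statistics (mean, standard deviation, histogram entropy) are assumptions standing in for the paper's texture, histogram, and binary feature sets.

```python
import numpy as np

def to_gray(rgb):
    """Convert an RGB image (H, W, 3) to gray level using common luminance weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def split_rois(gray):
    """Split an image into 4 ROIs; the 2x2 quadrant layout is an assumption here."""
    h, w = gray.shape
    return [gray[:h // 2, :w // 2], gray[:h // 2, w // 2:],
            gray[h // 2:, :w // 2], gray[h // 2:, w // 2:]]

def statistical_features(roi):
    """A few simple histogram/texture-style statistics per ROI (illustrative only)."""
    hist, _ = np.histogram(roi, bins=16, range=(0, 255), density=True)
    nonzero = hist[hist > 0]
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "entropy": -np.sum(nonzero * np.log2(nonzero)),
    }

# One synthetic 64x64 RGB image stands in for a dataset entry.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
rois = split_rois(to_gray(image))
features = [statistical_features(r) for r in rois]
print(len(rois), len(features))  # each image yields 4 ROIs and 4 feature dicts
```

Applying this to all 600 images would yield the 2,400 sub-image feature records described above, ready for feature selection and classification.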
Results: The analysis was divided into two phases. First, boosting-based ML classifiers (LogitBoost, AdaBoostM1, and Stacking) obtained 94.11%, 92.15%, and 89.21% accuracy, respectively. Second, decision tree algorithms (J48, Random Forest, and Random Committee) obtained 97.05%, 93.14%, and 92.15% accuracy, respectively.
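A comparison of this kind can be sketched with scikit-learn analogues of the classifiers named above. This is a hedged illustration, not a reproduction of the study: the synthetic 6-class dataset stands in for the optimized feature dataset, `DecisionTreeClassifier` approximates J48 (a C4.5 implementation), and `AdaBoostClassifier` approximates AdaBoostM1; accuracies on synthetic data will not match the reported figures.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the optimized statistical feature dataset (6 emotion classes).
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=6, random_state=0)

classifiers = {
    "AdaBoostM1-style": AdaBoostClassifier(random_state=0),    # boosting family
    "J48-style tree": DecisionTreeClassifier(random_state=0),  # C4.5 analogue
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy, averaged across folds.
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

Cross-validated accuracy is a reasonable proxy for the per-classifier comparison reported in the study, though the original evaluation protocol is not specified in the abstract.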
Conclusion: The decision-tree-based J48 classifier gave the highest classification accuracy, at 97.05%.
