Abstract

Human emotion recognition is cardinal in human–machine interaction, enabling machines to interact more intelligently with humans. It aids in monitoring driver inattention to improve road safety, helps children with autism infer other people's emotions from their facial expressions, and can also assist blind people. Extracting and understanding human emotion is therefore of high importance. This paper presents a comparative study of two approaches to facial emotion recognition, the Haar cascade and the histogram of oriented gradients (HOG). The Haar cascade uses the AdaBoost algorithm to select key features for efficient detection. The HOG feature descriptor, conventionally used for feature extraction in computer vision, focuses on the shape and structure of the object and captures the direction of edge features, yielding robust results. The experiments use the FER2013 dataset, which consists of 35,886 images labelled into seven categories: angry, disgust, fear, happy, neutral, surprise and sad. The experiments were carried out with 28,708 training images and 7,178 testing images. The results show that the HOG approach is more robust and outperforms the existing approaches, achieving an accuracy of 65.5%.

Keywords: HOG feature descriptor, Haar cascade, Feature extraction
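As a rough illustration of the pipeline the abstract describes, the Python sketch below (not from the paper) extracts a HOG descriptor from a 48x48 FER2013-style grayscale face using scikit-image, after an optional face crop with OpenCV's pre-trained frontal-face Haar cascade. The HOG parameters (9 orientations, 8x8 cells, 2x2 blocks) are illustrative assumptions, not the settings reported in the paper.

# Minimal sketch, assuming scikit-image and opencv-python are installed.
# Parameter choices are illustrative, not the authors' reported configuration.
import cv2
import numpy as np
from skimage.feature import hog

def detect_face(gray_frame: np.ndarray):
    """Crop the first face found by OpenCV's pre-trained Haar cascade,
    resized to the 48x48 resolution used by FER2013; None if no face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray_frame[y:y + h, x:x + w], (48, 48))

def extract_hog_features(gray_face_48x48: np.ndarray) -> np.ndarray:
    """Return a HOG descriptor encoding edge orientations of the face."""
    return hog(
        gray_face_48x48,
        orientations=9,            # gradient-direction bins (assumed value)
        pixels_per_cell=(8, 8),    # local cell size (assumed value)
        cells_per_block=(2, 2),    # blocks for contrast normalisation
        block_norm="L2-Hys",
    )

The resulting feature vector would typically be fed to a classifier (e.g. an SVM or a small neural network) trained on the seven FER2013 emotion labels; the paper does not specify the classifier here, so that step is omitted.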
