Abstract

Background: Automatic human emotion recognition is an active area of research due to its wide applications in Human Computer Interaction (HCI) systems, driver fatigue monitoring, surveillance, human assistance systems, smile detectors, etc. Objective: The study presents a fuzzy-based approach to extract facial features from an input image and builds different classification models to classify the image into two emotion classes, i.e. happy and neutral. The system has potential implications in smile detection, customer experience analysis and patient monitoring systems. Methods: The proposed system determines the dimensional attributes (l-attribute and w-attribute) of the mouth region extracted from the facial image using the Viola-Jones algorithm. The feature set is generated using a total of 136 images from the JAFFE, NimStim and MUG datasets. The discriminating power of the attributes is then evaluated using five different classification models. Results: The accuracy, precision and recall are determined for each classification model. The results show a good accuracy of 70% for the grayscale JAFFE and NimStim databases and 95% for the coloured MUG database. Conclusion: The mouth features calculated in the study are based on geometric coordinates, which eliminates the possibility of false distance measurements due to the presence of noise or shadows.
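The abstract names the l-attribute and w-attribute of the mouth region but does not give their formulas. A minimal sketch of the idea, assuming (as is common for geometric mouth features) that the w-attribute is the Euclidean distance between the mouth corners and the l-attribute is the distance between the top and bottom lip midpoints; the width-to-length ratio rule and its threshold below are hypothetical illustrations, not the paper's actual classifiers:

```python
import math

def mouth_attributes(left_corner, right_corner, top_lip, bottom_lip):
    """Compute illustrative mouth features from geometric coordinates.

    Assumed definitions (not taken from the paper):
      w-attribute: Euclidean distance between the mouth corners.
      l-attribute: Euclidean distance between top and bottom lip midpoints.
    Each point is an (x, y) tuple in image coordinates.
    """
    w_attr = math.dist(left_corner, right_corner)
    l_attr = math.dist(top_lip, bottom_lip)
    return l_attr, w_attr

def classify(l_attr, w_attr, ratio_threshold=3.0):
    """Hypothetical rule: a wider, flatter mouth suggests a smile.

    The paper evaluates five trained classification models instead;
    this single threshold is only a stand-in to show how the two
    attributes could separate the classes.
    """
    return "happy" if w_attr / l_attr >= ratio_threshold else "neutral"
```

For example, a mouth with corners at (0, 0) and (60, 0) and lip midpoints at (30, -5) and (30, 10) yields l = 15 and w = 60, a ratio of 4, which the toy rule labels "happy". Because both attributes are distances between detected coordinates rather than pixel-intensity measurements, shadows or noise in the image do not directly distort them, which is the robustness property the conclusion refers to.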
