Abstract

Different parts of the human face contribute in distinct ways to overall facial expressions such as anger, happiness, and sadness. This paper investigates how strongly different face parts influence the accuracy of Facial Expression Recognition (FER). In the context of machine learning, FER refers to the problem of training a computer vision system to automatically detect the facial expression present in a given facial image. It is a difficult image classification problem that is not yet fully solved and has received significant attention in recent years, mainly due to the growing number of possible applications in daily life. To establish the extent to which different face parts contribute to overall facial expression, various sections were extracted from a set of facial images and used as inputs to three different FER systems. The recognition rates obtained for each facial section confirm that different regions of the face vary in importance with respect to the accuracy achieved by the associated FER system.
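
The per-region evaluation described above (cropping individual face parts and measuring each part's contribution to recognition accuracy) can be sketched roughly as follows. The image size, region boundaries, and linear SVM classifier in this sketch are illustrative assumptions only; they stand in for, and are not, the three FER systems used in the paper.

```python
# Minimal sketch of evaluating FER accuracy per facial region.
# Assumptions: aligned 96x96 grayscale face images, illustrative crop
# boundaries, and a generic scikit-learn SVM as a stand-in classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical face regions given as (row_start, row_end, col_start, col_end).
REGIONS = {
    "eyes":  (20, 45, 10, 86),
    "nose":  (40, 70, 30, 66),
    "mouth": (65, 90, 25, 71),
    "full":  (0, 96, 0, 96),
}

def crop(images, box):
    """Crop every image to the given box and flatten into feature vectors."""
    r0, r1, c0, c1 = box
    return images[:, r0:r1, c0:c1].reshape(len(images), -1)

def region_accuracies(images, labels):
    """Train one classifier per facial region and report its recognition rate."""
    results = {}
    for name, box in REGIONS.items():
        X = crop(images, box)
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.3, random_state=0, stratify=labels)
        clf = SVC(kernel="linear").fit(X_tr, y_tr)
        results[name] = accuracy_score(y_te, clf.predict(X_te))
    return results

if __name__ == "__main__":
    # Synthetic placeholder data; a real study would use a labelled FER dataset.
    rng = np.random.default_rng(0)
    images = rng.random((200, 96, 96))
    labels = rng.integers(0, 3, size=200)  # e.g. anger / happiness / sadness
    for region, acc in region_accuracies(images, labels).items():
        print(f"{region:>5}: {acc:.2f}")
```

Comparing the per-region accuracies produced this way is one simple means of ranking face parts by their contribution to expression recognition; the paper performs this comparison across three distinct FER systems.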
