Abstract

An efficient, global and local image-processing based extraction and tracking of intransient facial features, and automatic recognition of facial expressions from both static and dynamic 2D image/video sequences, is presented. Expression classification is based on the Facial Action Coding System (FACS) lower and upper face Action Units (AUs), and discrimination is performed using Probabilistic Neural Networks (PNN) and a rule-based system. For the lower face, detection and tracking are based on a novel two-step active contour tracking system, while for the upper face a cross-correlation based tracking system is used to detect and track Facial Feature Points (FFPs). The extracted FFPs are used to derive geometric features that form a feature vector, which is used to classify an input image or image sequence into AUs and basic emotions. Experimental results show robust detection and tracking and reasonable classification, with an average recognition rate of 96.11% for six basic emotions in facial image sequences and 94% for five basic emotions in static face images.

Keywords: Active contours, Action Units, Facial Expressions, Probabilistic Neural Networks
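The abstract gives no implementation detail, but two of the named ingredients, cross-correlation based FFP tracking and PNN classification of geometric feature vectors, have well-known standard formulations. The sketch below is a minimal, NumPy-only illustration of those standard formulations under stated assumptions; it is not the authors' code. The template/search window sizes, the kernel width sigma, and all function and class names are hypothetical.

    import numpy as np

    # --- Cross-correlation based FFP tracking (illustrative sketch) ---

    def ncc(patch, template):
        """Normalized cross-correlation between two equally sized gray patches."""
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum())
        return float((p * t).sum() / denom) if denom > 0 else 0.0

    def track_ffp(prev_frame, next_frame, point, tmpl_half=5, search_half=10):
        """Track one feature point: cut a template around `point` in the previous
        frame and find its best-matching location in a search window of the next
        frame. Assumes the point lies away from the image border."""
        y, x = point
        tmpl = prev_frame[y - tmpl_half:y + tmpl_half + 1,
                          x - tmpl_half:x + tmpl_half + 1]
        best_score, best_pt = -2.0, point
        for dy in range(-search_half, search_half + 1):
            for dx in range(-search_half, search_half + 1):
                yy, xx = y + dy, x + dx
                patch = next_frame[yy - tmpl_half:yy + tmpl_half + 1,
                                   xx - tmpl_half:xx + tmpl_half + 1]
                if patch.shape != tmpl.shape:   # skip candidates falling off the image
                    continue
                score = ncc(patch, tmpl)
                if score > best_score:
                    best_score, best_pt = score, (yy, xx)
        return best_pt

    # --- Probabilistic Neural Network over geometric feature vectors ---

    class PNN:
        """Specht-style PNN: one Gaussian Parzen kernel per training pattern,
        class score = mean kernel response, decision = arg-max over classes."""
        def __init__(self, sigma=0.1):
            self.sigma = sigma
        def fit(self, X, y):
            self.X = np.asarray(X, dtype=float)   # pattern layer: stored training vectors
            self.y = np.asarray(y)                # class label of each pattern
            return self
        def predict(self, x):
            x = np.asarray(x, dtype=float)
            d2 = ((self.X - x) ** 2).sum(axis=1)              # squared distances to all patterns
            k = np.exp(-d2 / (2.0 * self.sigma ** 2))         # Gaussian kernel responses
            classes = np.unique(self.y)
            scores = [k[self.y == c].mean() for c in classes] # summation-layer outputs
            return classes[int(np.argmax(scores))]            # competitive output layer

In a pipeline of the kind the abstract describes, the tracked FFPs would be converted into geometric measurements (e.g., eyebrow-to-eye distances, mouth width and height) that form the feature vector passed to PNN.predict; the rule-based AU stage mentioned in the abstract is not sketched here.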
