Abstract
Most works on quantifying facial deformation are based on the action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. An AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depends on certain facial features. This work covers only the deformation of facial features in posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of the AUs is conducted to determine the temporal segments of expressions, i.e. the durations of the onset, apex and offset phases. Our findings show that AU15 (for the Sad expression) and AU12 (for the Happy expression) exhibit consistent facial feature deformation across all properties during the expression period. For AU1 and AU4, however, the intensity of their properties varies during the expression period.
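As an illustrative sketch of the rule-based idea described above (not the paper's actual rules; the exact AU combinations differ across studies, and the intensity threshold here is a hypothetical parameter), a mapping from activated AUs to the two expressions might look like this:

```python
# Illustrative sketch of a rule-based AU-to-expression mapping.
# The AU sets below (AU12 for Happy; AU1, AU4, AU15 for Sad) follow the
# AUs discussed in this work; the 0.5 threshold is a hypothetical
# parameter, not a value taken from the paper.

HAPPY_AUS = {12}       # AU12: lip corner puller
SAD_AUS = {1, 4, 15}   # AU1: inner brow raiser, AU4: brow lowerer,
                       # AU15: lip corner depressor

def classify_expression(au_intensities, threshold=0.5):
    """Label a frame as Happy, Sad, or Neutral from per-AU intensities.

    au_intensities: dict mapping AU number -> normalized intensity in [0, 1].
    threshold: hypothetical activation cutoff.
    """
    active = {au for au, v in au_intensities.items() if v >= threshold}
    if HAPPY_AUS <= active:
        return "Happy"
    if SAD_AUS <= active:
        return "Sad"
    return "Neutral"

# Example: AU12 strongly activated -> Happy
print(classify_expression({12: 0.8, 1: 0.1, 4: 0.0, 15: 0.2}))
```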
Highlights
Research areas such as user profiling, human psychology, and augmented and virtual reality require efficient expression classification
The face is dynamic by nature, demonstrating a universal set of facial expressions that involve 3D space and a temporal dimension (3D plus time)
This paper presents an analysis of 3D facial action unit detection with temporal analysis, and the recognition of Happy and Sad expressions based on the activated action units (AUs)
Summary
Research areas such as user profiling, human psychology, and augmented and virtual reality require efficient expression classification. Facial expression concerns the deformation of facial features. It is a highly dynamic process, and analyzing sequences of face instances rather than still images can help improve facial expression classification performance (Berretti et al., 2012). The Facial Action Coding System (FACS), introduced by Ekman and Friesen (1978), is the leading method for measuring facial deformation in psychological research. Action units (AUs) defined by FACS represent the facial muscle activity that produces changes in facial appearance (Ekman and Friesen, 1978). The face is dynamic by nature, demonstrating a universal set of facial expressions that involve 3D space and a temporal dimension (3D plus time). Posed and spontaneous facial dynamics can be explicitly analyzed by detecting a sequence of temporal segments (i.e. neutral, onset, apex, offset, neutral) (Hess and Kleck, 1990).
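As a rough illustration of how such temporal segments might be extracted from an AU intensity time series (a minimal sketch under assumed thresholds, not the method used in the paper), one can threshold the intensity and its frame-to-frame change:

```python
# Minimal sketch: segment an AU intensity time series into
# neutral / onset / apex / offset phases by thresholding the
# intensity and its frame-to-frame change. The `active` and
# `delta` thresholds are hypothetical, not values from the paper.

def temporal_segments(intensities, active=0.2, delta=0.02):
    labels = []
    for i, v in enumerate(intensities):
        change = v - intensities[i - 1] if i > 0 else 0.0
        if v < active:
            labels.append("neutral")
        elif change > delta:
            labels.append("onset")    # intensity rising
        elif change < -delta:
            labels.append("offset")   # intensity falling
        else:
            labels.append("apex")     # near-peak plateau
    return labels

# Example: a simple rise-plateau-fall intensity profile
series = [0.0, 0.1, 0.4, 0.7, 0.9, 0.9, 0.9, 0.6, 0.3, 0.1, 0.0]
print(temporal_segments(series))
```

The durations of the onset, apex and offset phases reported in the paper would then correspond to the lengths of the respective runs of labels.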