Abstract

The development of a two-stage approach for appraisal inference from automatically detected Action Unit (AU) intensities in recordings of human faces is described. AU intensity estimation is based on a hybrid approach that fuses information from individually fitted mesh models of the faces with texture information. Evaluation results for two datasets and a comparison against a state-of-the-art system are provided. In the second stage, the emotional appraisals novelty, valence, and control are predicted from the estimated AU intensities by linear regressions. Prediction performance is evaluated on face recordings from a market research study, which were rated by human observers in terms of perceived appraisals. Predictions of valence and control from automatically estimated AU intensities closely match those obtained from manually coded AUs in terms of agreement with human observers, while novelty predictions lag somewhat behind. Overall, the results highlight the flexibility and interpretability of a two-stage approach to emotion inference.
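The second stage described above can be illustrated with a minimal sketch: a linear regression mapping per-frame AU intensities to an appraisal rating such as valence. All data, dimensions, and variable names below are synthetic assumptions for illustration only, not taken from the study.

```python
import numpy as np

# Hypothetical sketch of stage two: predict an appraisal rating (valence)
# from AU intensities via ordinary least squares. Dimensions are assumed.
rng = np.random.default_rng(0)

n_frames, n_aus = 200, 12                      # frames x AU intensity features
au_intensities = rng.uniform(0, 5, size=(n_frames, n_aus))

# Synthetic linear ground truth plus noise, standing in for
# observer-rated valence (purely illustrative data).
true_weights = rng.normal(size=n_aus)
valence = au_intensities @ true_weights + rng.normal(scale=0.1, size=n_frames)

# Fit a linear regression with an intercept column appended.
X = np.hstack([au_intensities, np.ones((n_frames, 1))])
coef, *_ = np.linalg.lstsq(X, valence, rcond=None)

predicted = X @ coef
r = np.corrcoef(predicted, valence)[0, 1]       # in-sample correlation
```

One appeal of this two-stage design, as the abstract notes, is interpretability: each regression coefficient directly indicates how a given AU's intensity contributes to the predicted appraisal.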
