Abstract

This article addresses two important issues that frequently occur in existing automatic personality analysis systems: (1) the use of very short video segments, or even single frames, rather than long-term behaviour, to infer personality traits; and (2) the lack of methods to encode person-specific facial dynamics for personality recognition. To deal with these issues, this paper first proposes a novel Rank Loss that utilizes the natural temporal evolution of facial actions, rather than personality labels, for self-supervised learning of facial dynamics. Our approach first trains a generic U-Net-style model that infers general facial dynamics from a set of unlabelled face videos. The generic model is then frozen, and a set of intermediate filters is incorporated into the architecture. Self-supervised training is then resumed using only the videos of a specific person. This way, the learned filters' weights are person-specific, making them a valuable source for modeling person-specific facial dynamics. We then propose to concatenate the weights of the learned filters into a person-specific representation, which can be used directly to predict personality traits without needing the other parts of the network. We evaluate the proposed approach on both self-reported personality and apparent personality datasets. In addition to achieving promising results in the estimation of personality trait scores from videos, we show that the task conducted by the subject in the video matters, that fusing a combination of tasks achieves the highest accuracy, and that multi-scale dynamics are more informative than single-scale dynamics.

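The freeze-then-adapt procedure summarised above can be sketched in a few lines of code. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: the names PersonSpecificAdapter, temporal_rank_loss, and person_specific_representation are invented for illustration, and the margin ranking loss over temporally ordered frame scores is an assumed stand-in for the paper's Rank Loss.

import torch
import torch.nn as nn

class PersonSpecificAdapter(nn.Module):
    """Lightweight filters inserted into the frozen generic model (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        self.filters = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.filters(x)

def freeze(model: nn.Module) -> None:
    # Freeze the pre-trained generic model so only the inserted adapters are updated.
    for p in model.parameters():
        p.requires_grad = False

def temporal_rank_loss(frame_scores: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # Self-supervised objective: scores of temporally later frames should outrank
    # earlier ones, so the supervision comes from frame order rather than labels.
    earlier, later = frame_scores[:-1], frame_scores[1:]
    target = torch.ones_like(earlier)
    return nn.functional.margin_ranking_loss(later, earlier, target, margin=margin)

def person_specific_representation(adapters: list) -> torch.Tensor:
    # Concatenate the learned filter weights into a single vector; this vector
    # alone is fed to a personality-trait regressor, without the rest of the network.
    return torch.cat([a.filters.weight.flatten() for a in adapters])

In this sketch, only the adapter parameters receive gradients when training resumes on one person's videos, so the flattened filter weights serve as a compact descriptor of that person's facial dynamics.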