Abstract
Physiological automatic personality recognition has been largely developed to model an individual's personality traits from a variety of signals. However, few studies have tackled the problem of integrating multiple observations into a single personality prediction. In this study, we focus on a novel learning architecture that models personality traits under a Many-to-One scenario. We propose to integrate not only information about the user but also the effect of the affective multimedia stimulus. Specifically, we present a novel Acoustic-Visual Guided Attentive Graph Convolutional Network for enhanced personality recognition. The emotional multimedia content guides the formation of the physiological responses into a graph-like structure, capturing the latent inter-correlations among all responses to the affective multimedia. These graphs are then processed by a Graph Convolutional Network (GCN) to jointly model the instance-level and inter-correlation-level structure of the subject's responses. We show that our model outperforms the current state of the art on two large public corpora for personality recognition. Further analysis reveals that there indeed exists a multimedia preference for inferring personality from physiology, and several frequency-domain descriptors in ECG and the tonic component in EDA are shown to be robust for automatic personality recognition.
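To make the Many-to-One idea concrete, the sketch below shows one simplified way a stimulus-guided response graph could feed a GCN. This is an illustrative toy, not the authors' implementation: the class name, layer sizes, and the choice to derive edge weights from stimulus-feature similarity are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StimulusGuidedGCN(nn.Module):
    """Toy sketch (hypothetical): one node per multimedia clip a subject watched.
    Acoustic-visual stimulus features guide the edge weights of the response
    graph; a single graph convolution then mixes the physiological responses,
    and mean pooling yields one trait prediction per subject (Many-to-One)."""

    def __init__(self, phys_dim, stim_dim, hidden_dim, n_traits=5):
        super().__init__()
        self.stim_proj = nn.Linear(stim_dim, hidden_dim)   # embed stimulus features
        self.node_proj = nn.Linear(phys_dim, hidden_dim)   # embed physiological responses
        self.gcn_weight = nn.Linear(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, n_traits)     # pooled graph -> trait scores

    def forward(self, phys, stim):
        # phys: (n_clips, phys_dim) physiological descriptors of one subject
        # stim: (n_clips, stim_dim) acoustic-visual features of the stimuli
        s = self.stim_proj(stim)
        # Attention-style adjacency: responses to similar stimuli are linked strongly
        adj = F.softmax(s @ s.t() / s.size(-1) ** 0.5, dim=-1)
        x = torch.relu(self.node_proj(phys))               # node features
        x = torch.relu(adj @ self.gcn_weight(x))           # one GCN layer: A X W
        return self.readout(x.mean(dim=0))                 # pool all nodes into one prediction


# Usage with random data: one subject, 8 stimulus clips
model = StimulusGuidedGCN(phys_dim=32, stim_dim=64, hidden_dim=16)
traits = model(torch.randn(8, 32), torch.randn(8, 64))
print(traits.shape)  # torch.Size([5]), e.g. Big-Five trait scores
```

The key design point this sketch illustrates is that the graph structure is not fixed in advance: the stimulus features determine how strongly each pair of physiological responses influences each other before the GCN aggregates them into a single prediction.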