Abstract

This paper deals with computational modelling for predicting the idiosyncratic perception of others' emotions, that is, how individual external observers score the emotional states of people interacting with each other. We separately model the observer effect (individual differences among observers) and the conversational-scene, or video-clip, effect (how the interlocutors are interacting), based on Bayes' theorem under the assumption that the two are conditionally independent. The observer term describes the observer's cognitive tendency, including bias, in probabilistic form and contains no clip information. In contrast, the clip term describes how a target clip is recognized by an unspecified observer. The perceived emotion is predicted as the state that maximizes the conditional probability given the observer and the target clip. An experiment with 100 observers and 97 clips demonstrated, in a leave-one-out cross-validation scenario, that 1) there is in fact no statistically and practically significant interaction between observer and clip, and 2) our Bayesian modelling achieves 97 percent accuracy, with test-retest reliability as a reference. Furthermore, when combined with existing observer and clip models that can handle unknown observers and clips, our model yielded an accuracy of around 50 percent in the more challenging leave-one-subject-and-clip-out cross-validation scenario.
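
For illustration, the following is a minimal sketch of the Bayesian combination described above, not the authors' implementation. It assumes discrete emotion states and that the observer and clip terms are conditionally independent given the emotion state; all function and variable names (predict_perceived_emotion, p_e_given_observer, etc.) are hypothetical.

import numpy as np

# Sketch of the Bayesian combination outlined in the abstract (illustrative only).
# Assuming observer o and clip c are conditionally independent given emotion e,
# Bayes' theorem gives
#     P(e | o, c) ∝ P(o | e) P(c | e) P(e) ∝ P(e | o) P(e | c) / P(e),
# and the perceived emotion is the state maximizing this posterior.

def predict_perceived_emotion(p_e_given_observer, p_e_given_clip, p_e):
    """Return the index of the emotion state maximizing P(e | observer, clip).

    p_e_given_observer : observer term P(e | o), the observer's tendency and bias
    p_e_given_clip     : clip term P(e | c), how an unspecified observer rates the clip
    p_e                : prior P(e) over emotion states
    All arguments are 1-D arrays over the same set of emotion states.
    """
    posterior = p_e_given_observer * p_e_given_clip / p_e
    posterior /= posterior.sum()  # normalization (does not change the argmax)
    return int(np.argmax(posterior))

# Toy example with three hypothetical emotion states.
p_e         = np.array([0.40, 0.35, 0.25])  # prior over states
p_e_given_o = np.array([0.50, 0.30, 0.20])  # this observer's cognitive tendency
p_e_given_c = np.array([0.20, 0.50, 0.30])  # observer-independent rating of the clip
print(predict_perceived_emotion(p_e_given_o, p_e_given_c, p_e))  # -> 1

In this toy example the clip term favours the second state strongly enough to outweigh the observer's bias toward the first, so the predicted perceived emotion is state 1.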
