Abstract

In this article, we introduce <i>Mementos</i>, the first multimodal corpus for computational modelling of affect and memory processing in response to video content. It was collected online via crowdsourcing and comprises 1995 individual responses from 297 unique viewers to 42 different segments of music videos. Apart from webcam recordings of the viewers' upper-body behaviour (totalling 2012 minutes) and self-reports of their emotional experience, it contains detailed descriptions of the occurrence and content of 989 personal memories triggered by the video content. Finally, the dataset includes self-report measures related to individual differences in participants' background and situation (<i>Demographics</i>, <i>Personality</i>, and <i>Mood</i>), thereby facilitating the exploration of important contextual factors in research using the dataset. We 1) describe the construction and contents of the corpus itself, 2) analyse the <i>validity</i> of its content by investigating biases and consistency with existing research on affect and memory processing, 3) review previously published work that demonstrates the <i>usefulness</i> of the multimodal data in the corpus for research on automated detection and prediction tasks, and 4) provide suggestions for how the dataset can be used in future research on modelling <i>Video-Induced Emotions</i>, <i>Memory-Associated Affect</i>, and <i>Memory Evocation</i>.
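The corpus defines its own file layout and annotation format, which the abstract does not specify. Purely as an illustrative sketch, the hypothetical Python dataclasses below show one way a single crowdsourced response and its associated viewer-level measures could be organised when working with data of this kind; all class and field names here are assumptions for illustration, not the dataset's actual schema.

<pre>
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ViewerProfile:
    """Viewer-level context measures (hypothetical field names)."""
    viewer_id: str
    demographics: dict   # e.g. age, gender, country of residence
    personality: dict    # e.g. trait questionnaire scores
    mood: dict           # pre-session mood self-report


@dataclass
class MemoryDescription:
    """A personal memory reported as triggered by a video segment."""
    text: str                         # free-text description of the memory content
    valence: Optional[float] = None   # affect associated with the memory, if rated


@dataclass
class Response:
    """One viewer's response to one music-video segment (hypothetical schema)."""
    viewer_id: str
    segment_id: str
    webcam_video_path: str     # upper-body webcam recording for this response
    emotion_self_report: dict  # e.g. valence/arousal ratings of the experience
    memories: List[MemoryDescription] = field(default_factory=list)
</pre>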
