Abstract

From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, a direct and real-time inspection in realistic settings is not possible; discrete, indirect, post-hoc recordings are therefore the norm, and proper emotion assessment remains problematic. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution: it focuses on real-time continuous annotation of emotions, as experienced by the participants while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allows simultaneous reporting of valence and arousal, dimensions that are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were obtained from ECG, BVP, EMG (3x), GSR (or EDA), respiration and skin-temperature sensors. The dataset consists of the physiological and annotation data from 30 participants (15 male, 15 female) who watched several validated video stimuli. The validity of the emotion induction, as reflected in the annotation and physiological data, is also presented.
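Because the physiological signals are sampled at 1000 Hz while the joystick annotation stream is typically recorded at a much lower rate, a common first processing step is to interpolate the annotations onto the physiological timeline. The sketch below illustrates this with synthetic data; the 20 Hz annotation rate, trace shapes, and variable names are assumptions for illustration, not specifics taken from the CASE dataset files.

```python
import numpy as np

PHYS_FS = 1000.0   # physiological sampling rate (Hz), as stated in the paper
ANNO_FS = 20.0     # assumed joystick annotation rate (Hz) -- illustrative only

duration_s = 10.0
t_phys = np.arange(0, duration_s, 1.0 / PHYS_FS)   # 10,000 physiological samples
t_anno = np.arange(0, duration_s, 1.0 / ANNO_FS)   # 200 annotation samples

# Synthetic valence/arousal traces standing in for real joystick data
valence = np.sin(2 * np.pi * 0.1 * t_anno)
arousal = np.cos(2 * np.pi * 0.1 * t_anno)

# Linearly interpolate both dimensions onto the physiological time base,
# so that every 1 ms physiological sample carries an aligned
# (valence, arousal) label
valence_1khz = np.interp(t_phys, t_anno, valence)
arousal_1khz = np.interp(t_phys, t_anno, arousal)

print(valence_1khz.shape, arousal_1khz.shape)  # (10000,) (10000,)
```

Linear interpolation is a simple choice here; depending on the analysis, one might instead downsample the physiological signals to the annotation rate, which avoids inventing annotation values between joystick samples.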

Highlights

  • The field of Artificial Intelligence (AI) has rapidly advanced in the last decade and is on the cusp of transforming several aspects of our daily existence

  • Continuous annotation based on dimensional models is preferred and several annotation tools for undertaking the same have been developed[14,15,16]

  • Upon registering for the experiment, an email containing general information and instructions for the experiment was sent to the participants


Background & Summary

The field of Artificial Intelligence (AI) has rapidly advanced in the last decade and is on the cusp of transforming several aspects of our daily existence. Undertaking these steps is a fairly time-consuming and expensive exercise. To address this issue, several (uni- and multi-modal) datasets that incorporate continuous annotation have been developed. In all of these datasets except SEWA, which uses a joystick, mouse-based annotation tools were used, and valence and arousal were annotated independently. In recent years, both of these aspects have been reported to have major drawbacks[22,23,24]: separate annotation of valence and arousal does not account for the inherent relationship between them[23,25], and mouse-based annotation tools are generally less ergonomic than joysticks[22,23,24,26]. To the best of our knowledge, this is the first dataset that features continuous and simultaneous annotation of valence and arousal, and as such it can be useful to the wider Psychology and AC communities.

