Abstract

In this research we investigate the relationship between emotion and cooperation in dialogue tasks, an area in which many questions remain unsolved. One of the main open issues is the labeling of blended emotions and their recognition: agreement among raters in labeling and naming emotions is usually low, and, surprisingly, emotion recognition is higher under modality deprivation (audio-only or video-only rather than bimodal presentation). Because of these previous results, we do not ask raters to label emotions directly, but to annotate our corpus using a small set of features (such as lip or eyebrow shape). The analyzed materials come from an audiovisual corpus of Map Task dialogues elicited with a script. We identify the emotive tokens by means of simultaneous recordings of psychophysiological indexes (electrocardiogram, ECG; galvanic skin conductance, GSC; electromyography, EMG). After this selection, we annotate each token with our multimodal annotation scheme. Each annotation leads to a cluster of signals identifying the emotion corresponding to a cooperative/non-cooperative level; the last step assesses agreement among coders and the reliability of the emotion description. Future research will deal with brain imaging experiments on the effect of putting emotions into words and on the role of context in emotion recognition.
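The abstract does not specify which agreement statistic is used for the inter-coder reliability step; as a minimal illustrative sketch, the snippet below computes Cohen's kappa, a common chance-corrected agreement measure for two coders, on hypothetical eyebrow-shape annotations (the coder names and labels are invented for illustration).

```python
# Minimal sketch (not from the paper): chance-corrected agreement between
# two coders via Cohen's kappa. Labels and data are hypothetical.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same set of tokens."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of tokens on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[lab] * freq_b[lab] for lab in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical eyebrow-shape annotations for ten emotive tokens.
coder_1 = ["raised", "raised", "neutral", "lowered", "raised",
           "neutral", "neutral", "lowered", "raised", "neutral"]
coder_2 = ["raised", "neutral", "neutral", "lowered", "raised",
           "neutral", "raised", "lowered", "raised", "neutral"]

print(f"Cohen's kappa: {cohen_kappa(coder_1, coder_2):.2f}")  # ~0.69
```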
