Abstract

A fascinating challenge in the field of human–robot interaction is the possibility of endowing robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and to offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.

Highlights

  • Emotions are fundamental aspects of human beings and affect decisions and actions

  • As our brief summary of the state of the art shows, emotion recognition (ER) is feasible using several different kinds of data

  • Some modalities have been widely explored, both in the broader human–machine interaction (HMI) context and in human–robot interaction (HRI), e.g., facial expression recognition (FER) and emotional body gesture recognition (EBGR); others should be investigated more deeply, either because ER has not yet been sufficiently tested in HRI applications (e.g., EEG) or because existing HRI field tests focus on narrow aspects of emotion


Summary

INTRODUCTION

Emotions are fundamental aspects of human beings and affect decisions and actions. We subsequently selected published articles that addressed ER in HRI and reported significant results with respect to the recent literature, and that (a) performed emotion recognition in an actual HRI setting (i.e., where at least a physical robot and a subject were included in the testing phase); (b) focused on modalities that could be acquired during HRI using either the robot's embedded sensors or external devices: facial expression, body pose and kinematics, voice, brain activity, and peripheral physiological responses; and (c) relied on either discrete or dimensional models of emotions (see section 2.1). We organized the resulting articles by modality and emotional model.
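To make the distinction between the two families of emotional models concrete, the following minimal Python sketch contrasts a discrete representation (a fixed set of categorical labels, here Ekman's six basic emotions plus neutral, though the exact label set varies across the surveyed works) with a dimensional one (a point in a continuous valence–arousal space). The class names and value ranges are illustrative assumptions, not definitions taken from the article.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Discrete model: an emotion is one of a fixed set of categories.
# (Illustrative label set; surveyed works differ in which labels they use.)
class DiscreteEmotion(Enum):
    ANGER = auto()
    DISGUST = auto()
    FEAR = auto()
    HAPPINESS = auto()
    SADNESS = auto()
    SURPRISE = auto()
    NEUTRAL = auto()

# Dimensional model: an emotion is a point in a continuous space,
# most commonly valence (unpleasant..pleasant) and arousal (calm..excited),
# normalized here to [-1, 1] purely for illustration.
@dataclass
class DimensionalEmotion:
    valence: float
    arousal: float

# A recognizer built on a discrete model predicts a label...
predicted_label = DiscreteEmotion.HAPPINESS
# ...while one built on a dimensional model regresses coordinates.
predicted_point = DimensionalEmotion(valence=0.7, arousal=0.4)
```

In practice, the choice of model shapes the whole pipeline: discrete models lead to classification tasks and categorical datasets, whereas dimensional models lead to regression over continuous annotations.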

Emotional Models
Facial Expressions
Discrete Models
Thermal Facial Images
Body Pose and Kinematics
Dimensional Models
Brain Activity
Peripheral Physiological Responses and Multimodal Approaches
Findings
DISCUSSION