Abstract

Many advocate for artificial agents to be empathic. Crowdsourcing could help, by facilitating human-in-the-loop approaches and data set creation for visual emotion recognition algorithms. Although crowdsourcing has been employed successfully for a range of tasks, it is not clear how effective it is when the task involves subjective rating of emotions. We examined relationships between demographics, empathy, and ethnic identity in pain emotion recognition tasks. Amazon MTurkers viewed images of strangers in painful settings and tagged the subjects’ emotions. They rated their level of pain arousal and their confidence in their responses, and completed tests to gauge trait empathy and ethnic identity. We found that Caucasian participants were less confident than others, even when viewing other Caucasians in pain. Gender correlated with word choices for describing images, though not with pain arousal or confidence. The results underscore the need for verified information about crowdworkers in order to harness diversity effectively for metadata generation tasks.

Highlights

  • Many advocate for artificial agents and systems to be more empathic in their interactions with humans

  • Because we aimed to examine the affective content of individual word tags, and their correlation with participants’ demographics and personal characteristics, we selected a lexicon developed by a team of psycholinguists to capture the affective norms of individual words [67], a resource close in spirit to our task

  • It is notable that empathic concern (EC) plays a key role in explaining the variance of both pain arousal and task confidence, even when we control for gender, which we found to be highly correlated with EC and personal distress (PD)

Introduction

Many advocate for artificial agents and systems to be more empathic in their interactions with humans. Machines that can recognize emotions stand to play a significant role in the development of next-generation human–computer interaction systems [10, 14, 34, 63]. With the emergence of social media, users are uploading millions of pictures every day to share their emotions and thoughts with others. Imagine a robot designed to offer communication support to an individual with depression, which embeds emotion recognition technology. In such a context, the costs of misrecognizing the user’s distress are very high, with many potential consequences. If the robot were to mistake a sentiment such as disgust for sadness or hostility, this could lead to inappropriate responses on the part of the agent, such as offending a patient who may already be in a sensitive state.


