Abstract

People perceive the mind in two dimensions: intellectual and affective. Advances in artificial intelligence enable people to perceive the intellectual mind of a robot through semantic interactions. It remains controversial, however, whether a robot has an affective mind of its own in the absence of any intellectual actions or semantic interactions. We investigated pain experiences while subjects observed three facial expressions of a virtual agent modeling affective minds (i.e., painful, unhappy, and neutral). The cold pain detection threshold of 19 healthy subjects was measured while they watched a black screen; changes in their cold pain detection thresholds were then evaluated while they watched the facial expressions. Subjects were also asked to rate the pain intensity conveyed by each facial expression. Changes in cold pain detection thresholds were compared across conditions and adjusted for the rated pain intensities. The cold pain detection threshold increased significantly only when subjects watched the painful expression of the virtual agent. By directly evaluating intuitive pain responses to the facial expressions of a virtual agent, we found that people ‘share’ empathic neural responses with a robot (a virtual agent), and that these responses can emerge intuitively according to the observed pain intensity.
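The analysis described in the abstract (per-subject change in cold pain detection threshold relative to the black-screen baseline, compared after adjusting for rated pain intensity) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the subject IDs, threshold values, and intensity ratings are invented, and the simple regression-based (ANCOVA-style) adjustment is one plausible reading of "adjusted by the respective pain intensities".

```python
# Illustrative sketch of the threshold-change analysis. All data below are
# hypothetical; the adjustment method is an assumption, not taken from the paper.

def mean(values):
    """Arithmetic mean of an iterable of numbers."""
    values = list(values)
    return sum(values) / len(values)

def threshold_changes(baseline, condition):
    """Per-subject change in cold pain detection threshold (CPDT):
    condition reading minus black-screen baseline, matched by subject ID."""
    return {s: condition[s] - baseline[s] for s in baseline}

def adjust_for_intensity(changes, ratings):
    """ANCOVA-style adjustment (assumed method): remove the linear effect of
    each subject's rated pain intensity from their threshold change."""
    subjects = list(changes)
    x = [ratings[s] for s in subjects]
    y = [changes[s] for s in subjects]
    mx, my = mean(x), mean(y)
    var_x = sum((xi - mx) ** 2 for xi in x)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / var_x
             if var_x else 0.0)
    return {s: yi - slope * (xi - mx) for s, xi, yi in zip(subjects, x, y)}

# Hypothetical CPDT readings (degrees Celsius) for three subjects.
baseline = {"s1": 20.0, "s2": 18.5, "s3": 21.0}   # black screen
painful  = {"s1": 18.2, "s2": 16.9, "s3": 19.1}   # painful expression
ratings  = {"s1": 7.0,  "s2": 5.5,  "s3": 8.0}    # rated pain intensity (0-10)

changes = threshold_changes(baseline, painful)
adjusted = adjust_for_intensity(changes, ratings)
```

Because the adjustment only subtracts a mean-centered covariate term, the group mean of the adjusted changes equals the mean of the raw changes; the adjustment reduces between-subject variance attributable to how intense each subject judged the expression to be.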

Highlights

  • People perceive the mind in two dimensions: intellectual and affective

  • We investigated whether pain experiences are modulated intuitively and unconsciously when observing the facial expressions of a virtual agent modeling affective minds

  • Empathy is observed in pain [10]



Introduction

“Does a robot (a virtual agent) have a mind of its own?” This question has long been debated. Supported by remarkable advances in artificial intelligence, the answer “Yes” has recently become the dominant view, mainly in engineering. Researchers in this field have eagerly adopted neuro-computing to understand learning systems, model networks, and transform information from various modalities within the framework of computational neuroscience. Regarding the intellectual mind, some researchers argue that people merely create semantic meanings for a robot’s actions on their own, and that it remains undefined whether a robot really has a mind of its own. According to this argument, the affective mind of a robot might be semantically created through people’s cognitive processes based on their prior experiences. We investigated whether pain experiences are modulated intuitively and unconsciously when observing the facial expressions of a virtual agent modeling affective minds.

