Abstract

Mind perception is a fundamental part of anthropomorphism and has recently been suggested to be a dual process. The current research studied the influence of implicit and explicit mind perception on a robot’s right to be protected from abuse, both in terms of participants condemning abuse that befell the robot and in terms of participants’ own tendency to humiliate the robot. Results indicated that the acceptability of robot abuse can be manipulated through explicit mind perception, but were inconclusive about the influence of implicit mind perception. Interestingly, explicit attribution of mind to the robot did not make people less likely to mistreat it. This suggests that the relationship between a robot’s perceived mind and its right to protection is far from straightforward, with implications for researchers and engineers who want to tackle the issue of robot abuse.

Highlights

  • Humans tend to automatically ascribe to social robots a certain range of cognitive and emotional abilities. The consequences of this mind perception can be observed in human behaviour during human-robot interaction (HRI): humans tend to be polite to a robot [46] and have been recorded trying to keep it safe from harm [14].

  • To test the influence of implicit and explicit mind perception on the acceptability of robot mistreatment, a 2×3 ANOVA with ‘condemnation’ as the dependent variable was conducted.

  • Explicit information about a robot’s mind clearly affects its right to protection, yet the data and analyses at hand are insufficient to conclude that implicit cues trigger implicit mind attribution.
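The 2×3 between-subjects design named above (an implicit factor crossed with an explicit factor, with ‘condemnation’ as the dependent variable) can be sketched as a standard two-way ANOVA. The factor labels, cell sizes, and data below are illustrative assumptions, not the paper’s actual dataset:

```python
import numpy as np

def two_way_anova(y, a_idx, b_idx):
    """Balanced two-way between-subjects ANOVA on a 1-D response.

    y     : observations (e.g. condemnation scores)
    a_idx : integer level of factor A for each observation
    b_idx : integer level of factor B for each observation
    Returns F statistics for both main effects and the interaction.
    """
    a_levels, b_levels = np.unique(a_idx), np.unique(b_idx)
    na, nb = len(a_levels), len(b_levels)
    n = len(y) // (na * nb)        # per-cell size (design assumed balanced)
    grand = y.mean()
    # Main-effect sums of squares: deviation of marginal means from the grand mean.
    ss_a = n * nb * sum((y[a_idx == a].mean() - grand) ** 2 for a in a_levels)
    ss_b = n * na * sum((y[b_idx == b].mean() - grand) ** 2 for b in b_levels)
    ss_ab = ss_e = 0.0
    for a in a_levels:
        for b in b_levels:
            cell = y[(a_idx == a) & (b_idx == b)]
            # Interaction: cell mean minus both marginal means plus grand mean.
            ss_ab += n * (cell.mean() - y[a_idx == a].mean()
                          - y[b_idx == b].mean() + grand) ** 2
            ss_e += ((cell - cell.mean()) ** 2).sum()
    df_a, df_b = na - 1, nb - 1
    mse = ss_e / (len(y) - na * nb)  # error mean square
    return {"F_A": (ss_a / df_a) / mse,
            "F_B": (ss_b / df_b) / mse,
            "F_AB": (ss_ab / (df_a * df_b)) / mse}

# Illustrative data: factor A = implicit cue (2 levels), factor B = explicit
# framing (3 levels), 20 simulated participants per cell; only B shifts the
# condemnation score in this synthetic example.
rng = np.random.default_rng(0)
a_idx = np.repeat([0, 1], 60)
b_idx = np.tile(np.repeat([0, 1, 2], 20), 2)
y = 3.0 + 0.8 * b_idx + rng.normal(0, 1, 120)
res = two_way_anova(y, a_idx, b_idx)
```

Here `res["F_A"]` and `res["F_B"]` would correspond to the implicit and explicit main effects respectively; in practice a statistics package (e.g. `statsmodels`) would also supply the p-values.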

Introduction

Humans tend to automatically ascribe to social robots a certain range of cognitive and emotional abilities. As Salvini et al. [50] remarked, this behaviour appeared to be motivated by the wish to engage with the robot in a social (albeit negative) way rather than representing acts of vandalism. As a consequence, they labelled the behaviour robot bullying, a term later adopted by other HRI researchers [30,32,42,56]. This robot is programmed to appear to be a social being: it is capable of processing, interpreting and calculating an emotional response to its environment. Being a robot, however, it does not actually feel excited or nervous.

