Abstract

Past research indicates that people favor, and behave more morally toward, human ingroup members than outgroup members. People have shown a similar pattern in their responses toward robots. However, participants favored ingroup humans more than ingroup robots. In this study, I examine whether robot anthropomorphism can decrease the difference between humans and robots in ingroup favoritism. This paper presents a 2 × 2 × 2 mixed-design experimental study with participants (N = 81) competing on teams of humans and robots. I examined how people morally behaved toward and perceived players depending on players’ Group Membership (ingroup, outgroup), Agent Type (human, robot), and Robot Anthropomorphism (anthropomorphic, mechanomorphic). Results replicated prior findings that participants favored the ingroup over the outgroup and humans over robots, to the extent that they favored ingroup robots over outgroup humans. This paper also presents novel results indicating that patterns of responses toward humans were more closely mirrored by anthropomorphic than by mechanomorphic robots.

Highlights

  • Robots are becoming increasingly prevalent, behind the scenes and as members of human teams

  • I examine whether the prior finding that the difference between ingroup humans and robots is greater than the difference between outgroup humans and robots (H5) depends on robot anthropomorphism (H8): differences in ratings of ingroup humans and mechanomorphic robots will be larger than differences in ratings of ingroup humans and anthropomorphic robots, which in turn will be larger than differences in ratings of outgroup humans and robots

  • To test these hypotheses, I used 2 (Player: ingroup robot/outgroup human) × 2 (Robot Anthropomorphism: anthropomorphic/mechanomorphic) ANOVAs to examine H3 (a minimal sketch of this analysis follows this list)
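
As an illustration of the analysis named in the last highlight, here is a minimal sketch of a 2 (Player, within-subjects) × 2 (Robot Anthropomorphism, between-subjects) mixed ANOVA in Python. The pingouin call is a real library function, but the column names, group sizes, and simulated ratings are assumptions made for illustration; they are not the study’s materials or data.

```python
# Minimal sketch (assumed, not the study's analysis code):
# 2 (Player: ingroup robot vs. outgroup human; within-subjects)
# x 2 (Robot Anthropomorphism: anthropomorphic vs. mechanomorphic; between-subjects)
# mixed ANOVA on a favoritism rating.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for anthro in ("anthropomorphic", "mechanomorphic"):      # between-subjects factor
    for pid in range(40):                                 # 40 participants per group (illustrative)
        subject = f"{anthro[:4]}_{pid}"                   # unique participant ID
        for player in ("ingroup_robot", "outgroup_human"):  # within-subjects factor
            rows.append({
                "participant": subject,
                "anthropomorphism": anthro,
                "player": player,
                "rating": rng.normal(4.0, 1.0),           # placeholder favoritism rating
            })
df = pd.DataFrame(rows)

# Main effects of Player and Anthropomorphism, plus their interaction.
aov = pg.mixed_anova(data=df, dv="rating", within="player",
                     subject="participant", between="anthropomorphism")
print(aov)
```

The interaction row of the output is the term of interest here: a reliable Player × Anthropomorphism interaction would indicate that the gap between ingroup robots and outgroup humans depends on how humanlike the robot appears, the H8-style moderation described above.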

Summary

INTRODUCTION

Robots are becoming increasingly prevalent, behind the scenes and as members of human teams. People are more likely to cooperate with ingroup members (Tajfel et al., 1971; Turner et al., 1987), to favor them morally (Leidner and Castano, 2012), and to anthropomorphize them (i.e., humanize them; Haslam et al., 2008); such favoritism is a form of intergroup behavior. I examine a divergence in group-related responses toward humans and robots, along with some possible explanations. The results indicate how robot anthropomorphism moderates the effects of group membership on survey-based and behavioral favoritism toward ingroup and outgroup humans and robots. These results have moral implications: if participants are willing to give painful noise blasts to humans in order to spare their robot teammates, what else might they be willing to do?

RELATED WORK
Design
Participants
Procedure
RESULTS
DISCUSSION
CONCLUSION