Abstract

The black sheep effect (BSE) describes the evaluative upgrading of norm-compliant group members (ingroup bias), and the evaluative downgrading of deviant (norm-violating) group members, relative to similar outgroup members. While the BSE has been demonstrated extensively in human groups, it has yet to be shown in groups containing robots. This study investigated whether a BSE towards a ‘deviant’ robot (one low on warmth and competence) could be demonstrated. Participants performed a visual tracking task in a team with two humanoid NAO robots, with one robot being an ingroup member and the other an outgroup member. The robots offered advice to the participants which could be accepted or rejected, providing a measure of trust. Both robots were also evaluated using questionnaires, proxemics, and forced preference choices. Experiment 1 (N = 18) manipulated robot grouping to test whether our group manipulation generated ingroup bias (a necessary precursor to the BSE), which was supported. Experiment 2 (N = 72) manipulated the grouping, warmth, and competence of both robots, predicting a BSE towards deviant ingroup robots, which was supported. Results indicated that a disagreeable ingroup robot is viewed less favourably than a disagreeable outgroup robot. Furthermore, when interacting with two independent robots, a “majority rule” effect can occur in which each robot’s opinion is treated as an independent vote, with participants significantly more likely to trust the robots when both unanimously disagreed with them. No effect of warmth was found. The implications of these findings for human-robot team composition are discussed.

Highlights

  • The ability of humans to work effectively in groups is a fundamental aspect of human life, allowing for civilised and productive society, while selectively bestowing survival advantages upon stronger and more cohesive collectives

  • Hypothesis 1 predicted participants would more frequently select ingroup robot answers compared to outgroup robot answers

  • No difference was found in participant trust towards the ingroup versus the outgroup robot; instead, a majority-rule effect occurred


Introduction

The ability of humans to work effectively in groups is a fundamental aspect of human life, allowing for civilised and productive society, while selectively bestowing survival advantages upon stronger and more cohesive collectives. Traditional human working groups are becoming increasingly interspersed with artificial agents, in fields such as healthcare, the military, and transportation. Managers are more likely to hire employees who are like themselves [2], an effect known as similarity or affinity bias. These tendencies to positively evaluate ourselves and fellow group members are the crux of ingroup bias, in which people generally favour and prioritise members of their own group (the ingroup), rating them as more capable, friendly, and altruistic than corresponding members of another group (the outgroup) [3]. Ingroup favouritism is considered a factor in issues ranging from prejudice and racism to the social and economic disadvantage of minority groups [4].
