Abstract

Social robots designed to live and work with humans will have to recognize, learn from, and adapt to multiple users, since humans live and organize themselves in groups. Social robots must therefore consider the social dynamics that arise when humans interact in groups, as well as the social consequences of their own behaviour within these groups. When trying to adapt automatically to its users, a robot might unintentionally favour one human group member over others. For instance, in a work setting where a robot's implemented goal is to maximize team performance, it might decide to allocate more resources to those team members identified as high performers in the task, thereby discriminating against low performers. Algorithm-based learning and decision-making can thus result in unequal treatment, intergroup bias, and social exclusion of team members, with severe negative consequences for the emotional state of individuals and for the social dynamics of the group. In this paper, we advocate systematically investigating ingroup identification and intergroup bias in human-robot group interactions and their possible negative effects on individuals, such as feelings of rejection, social exclusion, and ostracism. We review theories from social psychology on groups and outline future lines of research for investigating social dynamics in human-robot mixed teams from the perspectives of psychology and computer science.
