The social robots market will grow considerably in the coming years. What the arrival of this new kind of social agent means for society, however, is largely unknown. Existing cases of robot abuse point to the risks of introducing such artificial social agents (ASAs) without consideration of the consequences (risks to the robots and to the human witnesses of the abuse). We believe that humans react aggressively towards ASAs when they are enticed into establishing dominance hierarchies, which happens when there is a basis for skill comparison. We therefore presented pairs of robots that varied in similarity to humans and in the degree to which their mechanisms/functions could be simulated with the human body (walking, jumping = simulatable; rolling, floating = non-simulatable). We asked which robot (i) more closely resembled a human and (ii) possessed more "essentialized human qualities" (e.g. creativity). To estimate social acceptability, participants also had (iii) to predict the outcome of a situation in which a robot approached a group of humans. For robots with simulatable functions, ratings of essentialized human qualities decreased as human resemblance decreased (jumper < walker). For robots with non-simulatable functions, the reverse relation was observed: the robot that least resembled humans (the floater) scored highest in these qualities. Critically, a robot's acceptability followed its ratings of essentialized human qualities. Humans respond socially to certain morphological (physical) and behavioral cues. Therefore, unless ASAs perfectly mimic humans, it is safer to equip them with mechanisms/functions that cannot be simulated with the human body.