Abstract

Understanding human-robot social comparison is critical for creating psychologically safe robots (i.e., robots that do not cause psychological discomfort). However, there has been limited research examining social comparison processes in human-robot interaction (HRI). We aimed to conceptually replicate prior research suggesting that the Self-Evaluation Maintenance (SEM) model of social comparison applies to HRI. In short, the SEM model describes the mechanisms by which others’ performance can affect one’s self-evaluation. We applied the model to an online presentation of a humanoid robot, RUDY. We predicted that task relevance would moderate the relationship between the robot’s performance level and participant evaluations of the robot. Specifically, when RUDY engaged in a low-relevance task (guessing someone’s age), we expected participants to evaluate RUDY accurately (i.e., to rate it more positively when it performed well than when it performed poorly). However, when RUDY engaged in a high-relevance task (understanding how people feel), we expected participants to evaluate RUDY inaccurately (i.e., to rate it negatively regardless of its actual performance). Contrary to our hypothesis, we found that participants in both the high- and low-relevance conditions evaluated RUDY accurately. Our results suggest that SEM effects may not generalize to all types of tasks and robots: a “highly relevant” task might mean something different depending on the exact nature of the human-robot relationship. Given the inconsistency between these findings and past research, discerning the boundary conditions for SEM effects may be crucial for developing psychologically safe robots.
