Abstract

The increasing use of robots in social applications calls for further research on human-robot trust. Such research needs to go beyond the conventional definition, which focuses mainly on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in user perceptions of experienced human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These scales consider a performance aspect (i.e., trust in an agent’s competence to perform a given task and its proficiency in executing the task accurately) and a moral aspect (i.e., trust in an agent’s honesty in fulfilling its stated commitments or promises) of human-robot trust. The question that arises is: to what extent do these trust aspects affect human trust in a robot? The main goal of this study is to investigate whether a robot’s undesirable behavior caused by a performance trust violation affects human trust differently than a similar undesirable behavior caused by a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows us to distinguish between performance and moral trust violations by a robot. We ran these experiments on Prolific with 100 recruited participants. Our results show that a moral trust violation by a robot affects human trust more severely than a performance trust violation of the same magnitude and with the same consequences.