Abstract

In human-robot interaction studies, trust is often defined as a process whereby a trustor makes themselves vulnerable to a trustee. The role of vulnerability, however, is often overlooked in this process, even though it could play an important role in gaining and maintaining trust between users and robots. To better understand how vulnerability affects human-robot trust, we first reviewed the literature to create a conceptual model of vulnerability with four vulnerability categories. We then performed a meta-analysis, first checking the overall contribution of the included variables to trust. The results showed that, overall, the variables investigated in our sample of studies have a positive impact on trust. We then conducted two multilevel moderator analyses to assess the effect of vulnerability on trust: 1) an intercept model that considers the relationships between our vulnerability categories; and 2) a non-intercept model that treats each vulnerability category as an independent predictor. Only model 2 was significant, suggesting that to build trust effectively, research should focus on improving robot performance in situations where the user is unsure how reliable the robot will be. As our vulnerability variable is derived from human-robot interaction studies and human-human studies of risk, we relate our findings to these domains and make suggestions for future research avenues.
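
To make the difference between the two moderator-model specifications concrete, the sketch below fits an intercept model and a no-intercept model as weighted meta-regressions on simulated study-level effect sizes. This is not the authors' analysis: the library choice (statsmodels), the fixed-effect inverse-variance weighting (the paper reports multilevel models), the category labels, and the data are all assumptions made purely for illustration.

```python
"""
Minimal sketch of an intercept vs. no-intercept moderator analysis,
using simulated data and a simple weighted meta-regression.
All labels and values are hypothetical.
"""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated study-level effect sizes (e.g., Fisher-z trust correlations),
# their sampling variances, and a vulnerability category per study.
# The four category names are placeholders, not the paper's categories.
k = 40
cat = rng.choice(["cat_a", "cat_b", "cat_c", "cat_d"], size=k)
vi = rng.uniform(0.01, 0.05, size=k)          # sampling variances
yi = 0.3 + rng.normal(0.0, np.sqrt(vi))       # observed effect sizes

df = pd.DataFrame({"yi": yi, "vi": vi, "cat": cat})
w = 1.0 / df["vi"]                            # inverse-variance weights

# Model 1: intercept model -- one category serves as the reference level,
# and the other categories are estimated as contrasts against it.
m1 = smf.wls("yi ~ C(cat)", data=df, weights=w).fit()

# Model 2: no-intercept model -- each category gets its own mean effect,
# tested directly against zero (each category as an independent predictor).
m2 = smf.wls("yi ~ 0 + C(cat)", data=df, weights=w).fit()

print(m1.summary())
print(m2.summary())
```

In the no-intercept form, each coefficient is the estimated mean effect for that category, which mirrors the abstract's description of treating each vulnerability category as an independent predictor rather than a contrast against a reference level.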
