Abstract

Trust is vital to human-robot collaboration, but, like human teammates, robots make mistakes that undermine trust. As a result, a human’s perception of his or her robot teammate’s trustworthiness can decrease dramatically [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e., competency), benevolence (i.e., concern for the trustor), and integrity (i.e., honesty) [5], [6]. Taken together, decreases in these dimensions of trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategy) between-subjects experiment. Preliminary results from the first 164 participants (19 to 24 per cell) highlight which repair strategies are effective relative to ability, integrity, and benevolence and to the robot’s level of anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.
