Abstract
As robots gain autonomy, human-robot task delegation can become more goal-oriented: specifying what to do rather than how. This can lead to unexpected robot behaviour. We investigated the effect of transparency and outcome on the perceived trustworthiness of a robot that deviates from the expected manner of reaching a delegated goal. Participants (N = 82) engaged in a virtual military mission as a human-robot team using a 2×2 between-subjects design (low vs. high transparency, positive vs. negative outcome). Participants received training on the expected manner of reaching the mission’s goal. In the actual mission, the robot deviated from the planned path. We manipulated whether the robot explained its deviation and whether the outcome was better or worse than the original plan. Results showed that transparency contributed to higher and more stable levels of trust without increasing subjective workload. While the robot’s deviation led to a violation of trust in the low-transparency condition, trust remained stable in the high-transparency condition, indicating a buffering effect of transparency on trust in the case of unexpected behaviour. The impact of outcome on trust was consistent across transparency conditions. Our findings underscore the role of transparency as a tool for fostering human-robot trust.