Abstract

Humans typically interact with humanoid robots with apprehension. This lack of trust can seriously reduce the effectiveness of a mixed team of robots and humans. By augmenting robots with an explanation capability, we can create effective interactions that build trust: the explanations provide justification for, and transparency into, the robot's decisions. To demonstrate such interaction, we used an interactive, partial-information game environment that requires team collaboration, based on the game of Spanish Domino. We paired a robot with a human, and this team opposed a team of two humans. We conducted a user study with sixty-three participants across different settings, investigating the effect of the robot's explanations on the humans' trust in, and perception of, the robot's behaviour. Our explanation-generation mechanism produces natural-language sentences that translate the robot's decisions into human-understandable terms. We video-recorded all interactions to analyse factors such as the participants' relational behaviours towards the robot, and we used questionnaires to measure the participants' explicit trust in the robot. Our main results indicate that explanations improved the participants' understanding of the robot's decisions, as evidenced by a significant increase in the participants' level of trust in their robotic partner. These results suggest that explanations stating the reason(s) for a decision, combined with transparency of the decision-making process, facilitate collaborative human–humanoid interactions.

Highlights

  • We examined the influence of previous experience with robots and of pet ownership on the human participants’ trust in the robot

  • The difference in trust levels across settings suggested that human participants trusted the robot when playing Domino, and trust levels increased after playing the game

  • We observed that trust levels were affected by the game result (Section 5.5.1), indicating that, in a team-based environment, winning or losing a game altered the perception of trust in the robotic game player


Introduction

Social robots are deployed in human environments such as hotels, shops, and hospitals, and in roles as co-workers. Robots are expected to cooperate with humans and contribute productively as teammates. The technical abilities of robotic systems have improved immensely, leading to an increase in the autonomy and functional abilities of existing robots [1]. As robots’ abilities increase, so does their complexity, yet the increased ability of the robot often fails to improve the competency of a human–robot team [2]. Effective teamwork between humans and robots requires trust.

