Abstract

Trust is essential in human-robot interaction, and at a time when machines are not yet fully reliable, it is important to study how robot hardware faults affect the human counterpart. This experiment builds on previous research that studied trust changes in a game-like scenario with the humanoid robot iCub. Several robot hardware failures (validated in a separate online study) were introduced in order to measure changes in trust due to the unreliability of the iCub. A total of 68 participants took part in this study. For half of them, the robot adopted a transparent approach, explaining each failure after it happened. Participants' behaviour was also compared with that of the 61 participants who played the same game with a fully reliable robot in the previous study. Contrary to expectations, introducing manifest hardware failures does not seem to significantly affect trust, while transparency mainly deteriorates the quality of the interaction with the robot.

Highlights

  • Trust is fundamental in any interaction between two agents that manifest a certain degree of autonomy

  • A one-way ANOVA followed by a Bonferroni post hoc test shows that the time to find the first egg was significantly shorter in the reliable Treasure Hunt (TH) condition than in either Unreliable Treasure Hunt (UTH) condition, whereas there is no significant difference between the Transparent (T) and Non-Transparent (NT) UTH conditions (F(2, 120) = 8.46; p < 0.001); a sketch of this analysis follows the list

  • NT was associated with a significantly higher task load than TH and T, though the latter comparison does not survive Bonferroni correction (Fig. 7). These results suggest that transparency could lower the task load index to a level similar to when the robot was not experiencing any faults
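
The highlights above report a one-way ANOVA with Bonferroni-corrected post hoc comparisons. Below is a minimal sketch of how such an analysis could be reproduced in Python with SciPy; the group data and sample sizes are synthetic placeholders rather than the study's measurements, and the paper does not specify which software was used.

```python
# Sketch of the reported analysis: a one-way ANOVA across the three
# conditions (TH: reliable robot; T: unreliable + transparent; NT:
# unreliable, no explanations), followed by Bonferroni-corrected
# pairwise t-tests. All data below are synthetic placeholders.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical time-to-first-egg samples (seconds); group sizes are
# placeholders (the paper reports F(2, 120) = 8.46 for its own data).
groups = {
    "TH": rng.normal(40, 10, 61),
    "T":  rng.normal(60, 15, 34),
    "NT": rng.normal(62, 15, 34),
}

# Omnibus one-way ANOVA; df = (k - 1, N - k).
f_stat, p_val = stats.f_oneway(*groups.values())
df_within = sum(map(len, groups.values())) - len(groups)
print(f"ANOVA: F(2, {df_within}) = {f_stat:.2f}, p = {p_val:.4f}")

# Bonferroni post hoc: pairwise t-tests at alpha / number of comparisons.
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} "
          f"({verdict} at Bonferroni alpha = {alpha_corrected:.4f})")
```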


Summary

Introduction

Trust is fundamental in any interaction between two agents that manifest a certain degree of autonomy: help is not accepted from a partner who is not trusted. Trust is defined as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” [1] or as “the reliance by an agent that actions prejudicial to their well-being will not be undertaken by influential others” [2]. For robots to become actual helpers, it is necessary that they become trustworthy; otherwise, human partners will not rely on robot support, and artificial agents will risk remaining little more than complex tele-operated tools [3]. Given its centrality for dependable human-robot collaboration, trust has gained particular attention in the community studying natural interactive processes between humans and robots.
