Abstract

Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past three decades. The majority of prior literature adopted a “snapshot” view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This “snapshot” view, however, does not acknowledge that trust is a dynamic variable that can strengthen or decay over time. To fill this research gap, the present study aims to model trust dynamics when a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his/her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model and learn its parameters using Bayesian inference. Our proposed model adheres to three properties of trust dynamics that characterize how human agents actually develop trust, and thus guarantees high model explicability and generalizability. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a root mean square error of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinct types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
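
The abstract does not state the model's functional form. Purely as an illustration of the premise above, the sketch below assumes a Beta-distribution style trust update in which robot successes and failures increment the pseudo-counts of a Beta distribution and the predicted trust is its mean; the function and parameter names (predict_trust, alpha0, beta0, w_success, w_failure) are hypothetical and are not taken from the paper.

    import numpy as np

    # Illustrative Beta-distribution trust model (an assumption, not the paper's
    # exact formulation): trust after the i-th interaction is Beta(alpha_i, beta_i),
    # and its mean alpha_i / (alpha_i + beta_i) is the point prediction.
    # A robot success increments alpha by w_success; a failure increments beta by w_failure.
    def predict_trust(performance_history, alpha0, beta0, w_success, w_failure):
        """Return predicted trust after each trial.

        performance_history : sequence of 1 (robot success) / 0 (robot failure)
        alpha0, beta0       : prior pseudo-counts (initial trust disposition)
        w_success, w_failure: experience gained per success / per failure
        """
        alpha, beta = float(alpha0), float(beta0)
        predictions = []
        for outcome in performance_history:
            if outcome == 1:
                alpha += w_success
            else:
                beta += w_failure
            predictions.append(alpha / (alpha + beta))
        return np.array(predictions)

    # Example: with w_failure > w_success, a single failure lowers predicted trust
    # more than a single success raises it.
    print(predict_trust([1, 1, 0, 1, 1], alpha0=5.0, beta0=5.0,
                        w_success=1.0, w_failure=3.0))

Under this kind of parameterization, theta = (alpha0, beta0, w_success, w_failure) would be the per-person parameter vector that Bayesian inference estimates from observed robot performance and trust reports.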

Highlights

  • The use of autonomous and robotic agents to assist humans is expanding rapidly

  • We propose a personalized model that predicts each individual human agent’s trust dynamics as s/he interacts with a robotic agent over time

  • The trust histories of previous human agents and the robotic agent’s performance history are fully available for estimating P(θ); a new human agent is assumed to complete l trials during a personalized training session and thereafter, when performing the real tasks, to report his or her trust every q trials (a hypothetical sketch of this protocol follows below)
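
The protocol in the last highlight can be made concrete with a hypothetical sketch: a discrete set of candidate parameter vectors theta (standing in for the population estimate P(θ) built from previous participants) is re-weighted by the likelihood of the new participant's sparse trust reports, collected during the l training trials and then every q trials. The Gaussian report likelihood, the candidate-set representation of P(θ), and all names below are illustrative assumptions rather than the paper's exact inference procedure.

    import numpy as np

    def trust_trajectory(outcomes, theta):
        """Predicted trust after each trial for theta = (alpha0, beta0, w_s, w_f),
        using the same illustrative Beta-style update as the earlier sketch."""
        alpha, beta, w_s, w_f = theta
        trajectory = []
        for outcome in outcomes:
            if outcome == 1:
                alpha += w_s
            else:
                beta += w_f
            trajectory.append(alpha / (alpha + beta))
        return np.array(trajectory)

    def posterior_weights(candidate_thetas, outcomes, reported, report_idx, sigma=0.1):
        """Re-weight candidate parameter vectors (a stand-in for P(theta) learned
        from previous participants) by the likelihood of the new participant's
        sparse trust reports, assuming Gaussian report noise with std sigma."""
        weights = []
        for theta in candidate_thetas:
            preds = trust_trajectory(outcomes, theta)[report_idx]
            weights.append(np.exp(-0.5 * np.sum((preds - reported) ** 2) / sigma ** 2))
        weights = np.array(weights)
        return weights / weights.sum()

    # Hypothetical example: three candidate parameter settings, l = 3 training trials
    # (trust reported on every training trial), then a report every q = 2 trials.
    candidates = [(5.0, 5.0, 1.0, 3.0), (2.0, 2.0, 1.0, 1.0), (8.0, 2.0, 0.5, 4.0)]
    outcomes   = [1, 1, 0, 1, 0, 1, 1]              # robot success/failure per trial
    report_idx = np.array([0, 1, 2, 4, 6])          # trials with a trust report
    reported   = np.array([0.55, 0.60, 0.45, 0.50, 0.60])
    print(posterior_weights(candidates, outcomes, reported, report_idx))

A point prediction of trust on a future trial could then be the weighted average of the candidate trajectories under these posterior weights.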



Introduction

The use of autonomous and robotic agents to assist humans is expanding rapidly. For the human–robot team to interact effectively, the human should establish appropriate trust in the robotic agents [4,5,6,7]. More than two dozen factors have been identified that influence one’s “snapshot” trust in automation. These factors can be broadly categorized into three groups: individual (i.e., the truster) factors, system (i.e., the trustee) factors, and environmental factors. System factors include the robot’s reliability [25,26], level of autonomy [27], adaptivity [28] and transparency [29], the timing and magnitude of robotic errors [9,30], and the robot’s physical presence [31], vulnerability [32], and anthropomorphism [33]. Environmental factors include multi-tasking requirements [34] and task emergency [35].
