Abstract

In this paper, we present a framework for trust-aware sequential decision-making in a human-robot team, wherein the human agent’s trust in the robotic agent depends on the reward obtained by the team. We model the problem as a finite-horizon Markov Decision Process with the human’s trust in the robot as a state variable. We develop a reward-based performance metric to drive the trust update model, allowing the robotic agent to make trust-aware recommendations. We conduct a human-subject experiment with 45 participants and analyze how the human agent’s trust evolves over time. Results show that the proposed trust update model accurately captures the human agent’s trust dynamics. Moreover, we cluster the participants’ trust dynamics into three categories, namely Bayesian decision makers, oscillators, and disbelievers, and identify personal characteristics that could be used to predict which category a person’s trust dynamics will fall into. We find that the disbelievers are less extroverted, less agreeable, and have lower expectations of the robotic agent compared to the Bayesian decision makers and oscillators. The oscillators tend to get significantly more frustrated than the Bayesian decision makers.
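
The abstract describes a trust variable carried in the MDP state and updated by a reward-based performance metric. As a minimal illustrative sketch only (not the authors' model): the function names `performance` and `update_trust`, the threshold, and the step size `alpha` below are hypothetical placeholders showing how such an update could be driven by the team reward at each step of a finite horizon.

```python
import numpy as np

def performance(reward, reward_threshold=0.0):
    """Reward-based performance signal: 1 if the team's reward clears a
    threshold, 0 otherwise (a simple binary proxy, assumed for illustration)."""
    return 1.0 if reward >= reward_threshold else 0.0

def update_trust(trust, reward, alpha=0.2, reward_threshold=0.0):
    """Move trust toward the latest performance signal by a step of size alpha."""
    p = performance(reward, reward_threshold)
    return (1 - alpha) * trust + alpha * p

# Example rollout over a finite horizon: trust is part of the state and
# evolves with the reward obtained at each decision step.
rng = np.random.default_rng(0)
trust = 0.5                      # initial trust in [0, 1]
for t in range(10):              # finite horizon of 10 decision steps
    reward = rng.normal()        # stand-in for the team reward at step t
    trust = update_trust(trust, reward)
    print(f"t={t}, reward={reward:+.2f}, trust={trust:.2f}")
```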
