Abstract
To achieve effective coordination in human–robot teams, robots must have an accurate model of human decision-making (i.e., theory of mind) to predict human actions and plan appropriate supplemental strategies. However, humans are often assumed to be approximately rational decision-makers, an assumption that does not always hold, especially when they face risky or uncertain decisions. Recent work in human–robot interaction has begun to address this by implementing risk-sensitive models of human behavior to characterize an individual’s risk-sensitivity. However, little attention has been given to the following question: what happens when the robot makes an incorrect inference about human risk-sensitivity? Failure to consider this may lead to ineffective coordination and degradation of perceived trustworthiness of the robot. In this article, we adopt a popular risk-sensitive model based on Cumulative Prospect Theory and vary the accuracy of the robot’s theory of mind when it interacts with either a risk-averse (pessimistic) or risk-seeking (optimistic) human. We designed a joint-pursuit game in which the human and robot are conditioned with different (assumptions of) human risk-sensitivity in a 2 \( \times \) 2, between-subjects study. Results from both simulated and human-subject experiments showed that team performance decreased and perceived trustworthiness of the robot was negatively impacted when the robot made an incorrect assumption about human risk-sensitivity. Overall, this work shows that risk-sensitive models can be used to great effect, but only if we remain mindful that misspecification can lead to negative outcomes.
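For reference, the standard Cumulative Prospect Theory value and probability-weighting functions (Tversky and Kahneman, 1992) take the form below; this is a sketch of the commonly used parametrization, and the specific functional forms and parameter values adopted in this article may differ:

\[
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0,\\
-\lambda\,(-x)^{\beta} & x < 0,
\end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}},
\]

where \( \alpha, \beta \) control diminishing sensitivity to gains and losses, \( \lambda \) controls loss aversion, and \( \gamma \) controls the distortion of probabilities (with \( \gamma < 1 \) overweighting small probabilities and underweighting large ones). Fitting such parameters to an individual is one way a robot's theory of mind can capture pessimistic (risk-averse) versus optimistic (risk-seeking) evaluations of uncertain outcomes.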