Abstract

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited from in-depth conversations at a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings situated in an ecosystem perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for a common understanding of overtrust in the context of HRI.

Highlights

  • We live in a time of increasing reliance on algorithmic systems for a multitude of activities

  • We offered an overview of pressing topics in the context of overtrust and robots

  • We mobilized insights solicited from in-depth conversations from a multidisciplinary workshop on the topic, held at a leading robotics conference


Introduction

We live in a time of increasing reliance on algorithmic systems for a multitude of activities. The workshop was held online through Zoom and focused on sharing knowledge about how to understand and address overtrust in robots. It brought together a group of researchers and practitioners from various disciplines, including law, ethics, communication, philosophy, science and technology studies, cognitive anthropology, computer science, and robotics engineering. The topics covered the black box problem and transparency, the notion of overtrust leading to over-reliance, liability risks, deception, the role of robots in human participant research, and a spotlight on the Wizard of Oz (WoZ) technique. The workshop participants investigated what happens when trust becomes overtrust and automation bias, the erroneous belief in technological capabilities, seeps in. Each participant was given a chance to respond to the comments of the others. During these discussions, the moderators’ task was to distil and re-group the common issues that were raised in order to determine a research agenda covering each participant’s central topics. In what follows, we combine a summary of the workshop’s important discussions and identify research gaps.

What we know about overtrust and robots
Overtrust research agenda 2021
The need for multidisciplinary definitions
Deception by robots
Does anthropomorphization lead to greater overtrust?
Mapping the ecosystem
Overtrust and liability
Education and literacy
Conclusion
