Abstract

This article discusses the fundamental requirements for making explainable robots trustworthy and comprehensible for non-expert users. To this end, we identify three main issues to solve: the approximate nature of explanations, their dependence on the interaction context, and the intrinsic limitations of human understanding. The article proposes an organic solution for the design of explainable robots rooted in a sensemaking perspective. The establishment of contextual interaction boundaries, combined with the adoption of plausibility as the main criterion for evaluating explanations and with the use of interactive, multi-modal explanations, forms the core of this proposal.

Highlights

  • Assistive robots are progressively spreading to many fields of application, which include health care, education and personal services [1,2,3]

  • Researchers agree that social robots and other artificial social agents should display some degree of interpretability in order to be understood, trusted, and used

  • The extension of the concept of explainability to robotic technologies, especially in the forms that are meant to be used in social contexts, calls for the connection with the study of human–robot interaction (HRI) [19,20]

Summary

Introduction

Assistive robots are progressively spreading to many fields of application, including health care, education and personal services [1,2,3]. While assistive robots must prove useful and beneficial to their users, their decisions and recommendations also need to be understandable. Researchers agree that social robots and other artificial social agents should display some degree of interpretability in order to be understood, trusted, and used.

The interdisciplinary challenge of explainable robots
Trusting explainable robots
Forms of interpretability: are explanations always needed?
Direct interpretability
Explanations as approximations
Limits of understanding
The problem of introspection
Context dependence
Different contexts imply different explanations
Users as novices and contextual boundaries
Explainable robots in the wild
Non-verbal cues
Plausibility over accuracy
Explanatory qualities
Interactive and iterative explanations
Context consideration in interactive explanations
Anomaly detection
Explanations and argumentation
From explanation to examination
Examination of robotic explanations
Issues of interactive explanations
Questioning the explainee
Multimodal explanations and the problem of the “failure cycle”
Alternative verbal strategies
Combined signals
Conclusions and limitations
