Abstract
This article considers how explainable AI can be used to help secure human-interactive robots. To do so, we acknowledge that robots interact with a variety of people. For example, some people may operate robots that perform tasks in their homes or offices, while other people may be tasked with defending robots from potential attackers. We describe how explainable AI can help the human operators of robots appropriately calibrate the trust they place in their systems, and we demonstrate this through an implementation. We also describe a novel, generalizable human-in-the-loop framework based on control loops that characterizes and explains attacks on robots to a robot defender. We explore the utility of this framework by analyzing its application to the incident management process for robots. The framework allows a formal definition of explainability and of the necessary conditions for explainability in robots. The overarching goal of this article is to introduce the application of explainability to the security of robots as a novel area of research; accordingly, we also discuss several open research problems we uncovered while applying explainable AI to the security of robots.