Abstract

Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions.

Highlights

  • Artificial Intelligence (AI) is being increasingly used in maritime applications

  • During the early-stage design of the human-machine interface for the milliAmpere2 prototype ferry, it emerged that trust was important for establishing an interaction relationship with passengers

  • We observed that explaining Autonomous Surface Vehicles (ASVs) functionality to potential passengers played a central role in trust (Section 3.1), suggesting that explaining using familiar concepts plays an important role in human-centered XAI


Introduction

Artificial Intelligence (AI) is being increasingly used in maritime applications. This is perhaps most clearly seen in Autonomous Surface Vehicles (ASVs), a category of maritime vessels that emerged in oceanographic and marine biological data collection [1,2,3] and has recently branched out into urban mobility [4,5,6] (Figure 1). Work towards scaling Maritime Autonomous Surface Ships (MASS) and ASVs into widespread use raises new challenges related to ensuring that AI system goals are aligned with the values of those who will be interacting with them. This is broadly the motivation behind the growing field of Explainable AI (XAI), characterized, as expressed by [13], by its mission to ‘create AI systems whose learned models and decisions can be understood and appropriately trusted by end users’. While developers focus on explaining AI techniques using mathematics and data visualization, a generalized approach based on leveraging more universal interaction elements, such as analogy-making, design, aesthetics, form, and motion characteristics, was used to reach a wider audience of end users in the case studies we examined. These elements were important in establishing an interactive relationship with new end users during their first encounter with an ASV. Explaining the ASV’s safety to users may be more effectively accomplished by comparing it to an existing baseline; for example, whether it can be shown to be significantly safer than a corresponding manned surface vessel.

