Abstract

In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as ‘AI explainability’ or ‘XAI’ methods. This paper presents an overview of XAI methods and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of the human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue ‘Towards symbiotic autonomous systems’.

Highlights

  • Artificial intelligence (AI), in the form of machine learning (ML), is being used in ‘critical’ systems

  • ML-based systems are already being used in situations that can affect human wellbeing, life and liberty

  • They sit within a wider accountability framework, in which human decision-makers are still required to give the normative reasons or justifications for the ML models



Introduction

Artificial intelligence (AI), in the form of machine learning (ML), is being used in ‘critical’ systems. Critical systems directly affect human wellbeing, life or liberty. These may be digital systems (such as those used by human experts to inform decisions regarding medical treatment or prison sentences) or embodied autonomous systems (such as highly automated cars or unmanned aerial vehicles).

