Abstract

Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived from the demands of Rawlsian public reason. In the second part of the paper, I try to show that the argument from the limitations of human cognition fails to get AI off the hook of public reason. Against a growing trend in AI ethics, my main argument is that the analogy between human minds and artificial neural networks fails because it suffers from an atomistic bias which makes it blind to the social and institutional dimension of human reasoning processes. I suggest that developing interpretable AI algorithms is not the only possible answer to the explainability problem; social and institutional answers are also available and in many cases more trustworthy than techno-scientific ones.

Highlights

  • It is widely recognized that the deployment of machine learning-based artificial intelligence systems in all spheres of human life brings with it a host of thorny ethical quandaries that occupy researchers and policy makers

  • AI enthusiasts are quick to ask whether it would be ethically right to forgo the benefits ensuing from delegating tasks to AI systems on the basis of their opacity

  • As Hinton rhetorically asked in a tweet: “Suppose you have cancer and you have to choose between a black box AI surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the AI surgeon to be illegal?”

Introduction

It is widely recognized that the deployment of machine learning-based artificial intelligence systems in all spheres of human life brings with it a host of thorny ethical quandaries that occupy researchers and policy makers. AI algorithms can invade our privacy by inferring information about aspects of ourselves that we did not wish to disclose, by correlating data points that are not legally considered personal information (Wachter & Mittelstadt, 2019). Even those who endorse a nuanced and prudent view of AI’s capacities and foreseeable impacts on human life believe that the new wave of automation enabled by AI is likely to exacerbate existing inequalities (James, 2020). Against this background, I will try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived from the demands of Rawlsian public reason.

Machine Learning’s Explainability Problem
AI’s Public Reason Deficit
Minds and Machines
Social Reasoning and Institutional Facts
The Explainability Requirement in Practice
A Social and Institutional Approach to the Explainability Problem
Conclusion