Abstract
This article offers a legal-conceptual analysis of the use of counterfactuals (what-if explanations) as transparency tools for automated decision making (ADM). The first part of the analysis discusses three notions: transparency, ADM, and generative artificial intelligence (AI). The second part takes a closer look at the pros and cons of counterfactuals in making ADM explainable. Transparency is only useful if it is actionable, that is, if it enables those affected to challenge systemic bias or unjustified decisions. Existing ways of providing transparency about ADM systems often fall short of being actionable. In contrast to many existing transparency tools, counterfactual explanations hold the promise of providing actionable and individually tailored transparency without revealing too much of the underlying model (attractive where the ADM system is a trade secret or where it is important that the system cannot be gamed). Another strength of counterfactuals is that they show that transparency should not be understood as the immediate visibility of some underlying truth. While promising, counterfactuals have their limitations. Firstly, there is always a multiplicity of counterfactuals (the Rashomon effect). Secondly, counterfactual explanations are not natural givens: they are constructed, and the many underlying design decisions can turn out for better or worse.
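To make the abstract's central ideas concrete, the following is a minimal, purely illustrative sketch, not a method from the article itself. It assumes a hypothetical linear credit-scoring model (the feature names, weights, threshold, and applicant values below are invented for illustration) and searches for the smallest single-feature change that would flip a rejection into an approval. That several such changes exist is a toy instance of the Rashomon effect mentioned above.

```python
import numpy as np

# Hypothetical linear credit model: approve if the weighted score
# meets a cutoff. All numbers here are illustrative assumptions.
WEIGHTS = np.array([0.40, 0.35, 0.25])   # income_k, years_employed, savings_k
FEATURES = ["income_k", "years_employed", "savings_k"]
THRESHOLD = 30.0

def approved(x: np.ndarray) -> bool:
    """Model decision: a simple weighted score against a cutoff."""
    return float(WEIGHTS @ x) >= THRESHOLD

def counterfactuals(x: np.ndarray, step: float = 1.0, max_steps: int = 20):
    """For each feature, find the smallest increase (in `step` units)
    that flips a rejection into an approval. Several features may
    work, yielding multiple counterfactuals (the Rashomon effect)."""
    results = []
    for i, name in enumerate(FEATURES):
        for k in range(1, max_steps + 1):
            x_cf = x.copy()
            x_cf[i] += k * step
            if approved(x_cf):
                results.append((name, k * step, x_cf))
                break
    return results

applicant = np.array([60.0, 2.0, 10.0])  # score 27.2 -> rejected
assert not approved(applicant)
for name, delta, x_cf in counterfactuals(applicant):
    print(f"If {name} were higher by {delta:g}, the application "
          f"would be approved (score {WEIGHTS @ x_cf:.1f}).")
```

Each returned counterfactual is a different, equally valid what-if explanation; deciding which one to present to the applicant is precisely the kind of design decision the abstract flags as capable of turning out for better or worse.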