Abstract

Businesses increasingly rely on algorithms, data-trained sets of decision rules, to implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a "right to explanation." Our contention is that much of the problem of algorithmic transparency can be addressed by rethinking the right to informed consent in the age of artificial intelligence. It is often said that, in the digital era, informed consent is dead. This negative view stems from a rigid understanding of informed consent as a static and complete transaction grounded morally in individual autonomy. Such a view is insufficient, especially when data is used in secondary, non-contextual, and unpredictable ways, as is the inescapable nature of advanced AI systems. We submit that an alternative view of informed consent, as an assurance of trust for incomplete transactions, allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.
