Abstract
Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction. Such a view is insufficient, especially when data are used in a secondary, noncontextual, and unpredictable manner—which is the inescapable nature of advanced artificial intelligence systems. We submit that an alternative view of informed consent—as an assurance of trust for incomplete transactions—allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.
Highlights
Businesses increasingly rely on algorithms that are data-trained sets of decision rules and implement decisions with little or no human intermediation.
We provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.”
Business Ethics Quarterly

Given autonomous decision algorithms and their reliance on user-provided data, a growing number of computer scientists and governmental bodies have called for transparency under the broad concept of “algorithmic accountability.”[3] In particular, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (EU) 2016/679,[4] part of which regulates the uses of automated algorithmic decision systems.
Summary
An algorithm is a set of rules and procedures that leads to a decision. Businesses have been using algorithms for a long time. Equal treatment is an important moral value upholding a right to explanation in cases like credit card limits or the approval of loan applications. We will use more generic scenarios (e.g., targeted advertising, whether commercial or political) as well as card/loan applications. When users see targeted advertisements on Facebook and click “Why am I seeing this ad?” they are taken to a page explaining, for instance, that “You are on a list that Organization X uploaded on October 2nd.” This is better than nothing, but not specific enough to be meaningful. Users do not know how their personal information is traded from one platform to another, how their identities are profiled for commercial or political advertisements (e.g., Facebook and Cambridge Analytica[16]), or how their online profiles eventually influence how they think about themselves.[17]