Abstract

The mounting evidence of unintended harmful social consequences of automated algorithmic decision-making (AADM), powered by AI and big data, in transformative services (e.g., welfare services) is startling. The algorithmic harm experienced by individuals, communities and society-at-large involves new injustice claims and disputes that go beyond issues of social justice. Drawing from the theory of "abnormal justice", in this paper we articulate a new theory of algorithmic justice that addresses the questions: WHAT is the matter of algorithmic justice? WHO counts as a subject of algorithmic justice? HOW is algorithmic justice performed? And HOW can disputes about the WHAT, WHO and HOW of algorithmic justice be addressed and resolved? We illustrate the theory of algorithmic justice by drawing from a case of AADM in social welfare services, widely adopted by governments around the world. Our research points to datafication, technological inscribing and the systemic nature of injustices as important IS-specific aspects of algorithmic justice. Our main practical contribution comes from the articulation of algorithmic justice as a framework that (1) makes visible the injustices related to the "what", "who" and "how" of AADM in transformative services, and (2) provides further insights into how we might address and resolve these algorithmic injustices.
