Abstract

It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.

Highlights

  • Public and private sector entities are increasingly delegating decision-making to algorithmic systems that make predictions about our creditworthiness, our propensity for criminal behaviour, our access to welfare benefits and services, our prospective academic outcomes, or our expected job performance if hired, to name just a few examples.

  • Much recent work on algorithmic fairness has explored this phenomenon from the point of view of substantive justice—often understood in terms of fair distributions of outcomes—while assuming that algorithmic systems are at least procedurally just. We question the latter assumption and argue that in contexts of pervasive structural injustice, algorithmic systems fail a necessary condition of procedural justice: the Like Cases Maxim (LCM), which holds that individuals with morally equivalent sets of features should receive the same treatment (a hedged formal rendering of the LCM follows these highlights).

  • Given that procedural justice necessarily requires—as we have argued in section 1—that we treat similar cases alike, it is clear that blindness does not satisfy this necessary condition: we cannot treat similar cases alike if we do not know how individuals are truly positioned, factoring in the extent to which their respective advantaged and disadvantaged social positions have been shaped by structural injustice.
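
To make the LCM's formal content explicit, here is a hedged rendering in the style of the formal algorithmic fairness literature; the decision rule f, outcome metric D, and dissimilarity measure d are illustrative placeholders rather than the article's own notation:

    % Let X be the set of individuals, f : X -> O a decision rule, and
    % d : X x X -> [0, 1] a measure of morally relevant dissimilarity.
    \[ \forall x, y \in X :\quad d(x, y) = 0 \;\Longrightarrow\; f(x) = f(y) \]
    % A graded variant bounds differences in treatment by differences in
    % moral position, for a metric D on outcomes and a constant L > 0:
    \[ D\bigl(f(x), f(y)\bigr) \;\le\; L \cdot d(x, y) \]

On either rendering, everything turns on how d is specified: the article's claim is that, under structural injustice, specifying which cases count as relevantly similar requires deliberative capacities that algorithmic systems, left to their own devices, lack.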


Summary

Procedural justice in a structurally unjust world

Structural injustice exists when institutions and social practices harm groups of individuals by creating and reifying social positions that are associated with complex advantages and disadvantages within a larger-scale framework of social relations (Young 2011, 39). Recent applications of artificial intelligence in public- and private-sector decision-making—such as algorithmic recidivism risk prediction in a criminal justice context (Angwin et al. 2016), the algorithmic allocation of welfare benefits and services (Brown et al. 2019), or algorithmic rankings of applicants during university admissions and hiring processes (Raghavan et al. 2020)—are paradigmatic examples of procedures that are unjust in this “wide” sense. While such systems can be statistically powerful—they are often able to yield sufficiently accurate outputs concerning individuals subject to algorithmic procedures—they fail to attend to how, in a nonideal context, outputs that are sufficiently accurate on an individual level can still entrench structural disadvantages linked to social group membership. Our point is that if human decision-makers decide to use algorithmic systems in a given domain, algorithmic procedures must be designed so as to capture less readily apparent but justice-relevant features shaped by structural injustice.
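
To see how individually accurate outputs can still track structurally shaped records, consider a minimal sketch; the group labels, base rates, and "over-policing" detection parameter below are our illustrative assumptions, not data or notation from the article:

    # Toy simulation (all numbers are illustrative assumptions): a score
    # can be accurate about each individual's *recorded* outcome while
    # still entrenching group-level disadvantage, because the record
    # itself is shaped by structural injustice (e.g., over-policing).
    import random

    random.seed(0)

    def simulate(group, n=10_000):
        """Underlying behaviour is identical across groups; only the
        observation process (assumed policing intensity) differs."""
        detection = 0.9 if group == "B" else 0.3  # assumed over-policing of B
        records = []
        for _ in range(n):
            offends = random.random() < 0.2       # same base rate in both groups
            recorded = offends and (random.random() < detection)
            records.append(recorded)
        return records

    for group in ("A", "B"):
        records = simulate(group)
        # A score calibrated to the records simply reproduces this rate.
        rate = sum(records) / len(records)
        print(f"group {group}: recorded-offence rate = {rate:.2%}")

Roughly, group A shows a 6% recorded rate and group B an 18% rate, even though behaviour is identical by construction. A predictor trained on these records is accurate about every individual's record, yet it treats like cases unalike: what differs between the groups is not behaviour but how structural injustice shaped the data-generating process, which is exactly the kind of less readily apparent, justice-relevant feature the article argues algorithmic procedures must be designed to capture.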

A deceptively simple example
Insufficiently Informative Input strategies
How to proceed with caution
How much is at stake for decision-maker A
Conclusion