Abstract

This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain why this systemic exclusion is of moral concern and to offer a solution to address it.

Highlights

  • A hiring manager faces a towering stack of resumes from which she must choose a short list of candidates to interview.

  • For example, she might choose to interview only candidates wearing purple shoelaces or hire only those who enjoy puns. Is such arbitrariness of moral concern? This is the first question we address, focusing on algorithmic decision-making in domains that do not provide specific criteria to guide decision-making.

  • We argue that the arbitrariness of algorithms is not of moral concern, but that the systemic exclusion they often yield is.


Introduction

A hiring manager faces a towering stack of resumes from which she must choose a short list of candidates to interview. Many companies and state agencies rush to the same private providers: over one-third of the Fortune 100 companies use the same automated candidate screener, Hirevue (Green 2021). Given this standardization, one algorithmic decision-making system can replace or influence thousands of unique human deciders. Each human decider applies her own criteria, some good, some biased, some arbitrary: she might, for example, choose to interview only candidates wearing purple shoelaces or hire only those who enjoy puns. The single automated decision-making system that replaces a thousand human decision makers likewise has its own set of good, biased, and arbitrary criteria. Automated decision-making systems that make uniform judgments across broad swathes of a sector, which we might call algorithmic leviathans for short, limit people’s opportunities in ways that are morally troubling.

