Abstract

This paper contributes to debates over algorithmic discrimination with particular attention to structural theories of racism and the problem of “proxy discrimination”—discriminatory effects that arise even when an algorithm has no information about socially sensitive characteristics such as race. Structural theories emphasize the ways that unequal power structures contribute to the subordination of marginalized groups: these theories thus understand racism in ways that go beyond individual choices and bad intentions. Our question is: how should a structural understanding of racism and oppression inform our understanding of algorithmic discrimination and its associated norms? Some responses to the problem of proxy discrimination focus on fairness as a form of “parity,” aiming to equalize metrics between individuals or groups—looking, for example, for equal rates of accurate and inaccurate predictions between one group and another. We argue that from the perspective of structural theories, fairness-as-parity is inapt in the algorithmic context; instead, we should consider social impact—whether a use of an algorithm perpetuates or mitigates existing social stratification. Our contribution thus offers a new understanding of what algorithmic racial discrimination is.
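To make the “fairness-as-parity” notion concrete, the following is a minimal, hypothetical sketch (not from the paper; all data, names, and functions are invented for illustration) of the kind of parity check the abstract describes: comparing a classifier’s rates of accurate and inaccurate predictions across two groups.

```python
# Hypothetical sketch (not from the paper): checking error-rate parity,
# i.e., whether a classifier's true-positive and false-positive rates
# are (approximately) equal across two groups.

from typing import Sequence, Tuple

def group_rates(y_true: Sequence[int], y_pred: Sequence[int]) -> Tuple[float, float]:
    """Return (true-positive rate, false-positive rate) for one group."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    return tpr, fpr

# Invented toy labels and predictions for two groups.
y_true_a, y_pred_a = [1, 1, 0, 0, 1], [1, 0, 0, 1, 1]
y_true_b, y_pred_b = [1, 0, 0, 1, 0], [0, 0, 1, 1, 0]

tpr_a, fpr_a = group_rates(y_true_a, y_pred_a)
tpr_b, fpr_b = group_rates(y_true_b, y_pred_b)

# Parity-style fairness asks that these gaps be near zero; the paper's
# argument is that such parity can still miss structural social impact.
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}, FPR gap: {abs(fpr_a - fpr_b):.2f}")
```

Equal gaps on such metrics are exactly what parity-based accounts target; the paper’s structural critique is that an algorithm could satisfy these checks while still perpetuating existing social stratification.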
