Abstract

In recent years, fairness in machine learning (ML), artificial intelligence (AI), and algorithmic decision-making systems has emerged as a highly active area of research and development. To date, most measures and methods to mitigate bias and improve fairness in algorithmic systems have been built in isolation from policymaking and civil society contexts, and lack serious engagement with philosophical, political, legal, and economic theories of equality and distributive justice. Many current measures define “fairness” in simple terms: narrowing gaps in performance or outcomes between demographic groups while preserving as much of the original system’s accuracy as possible. This oversimplified translation of the complex socio-legal concept of equality into fairness measures is troubling.

Many current fairness measures suffer from both fairness and performance degradation, or “leveling down,” where fairness is achieved by making every group worse off or by bringing better-performing groups down to the level of the worst off. Leveling down is a symptom of the decision to measure fairness solely in terms of equality, that is, disparity between groups in performance or outcomes. This framing ignores other relevant concerns of distributive justice, such as welfare or priority, which are more difficult to quantify and measure. When fairness can only be measured in terms of the distribution of performance or outcomes, corrective actions can likewise only target how these goods are distributed between groups. We refer to this trend as “strict egalitarianism by default.” Strict egalitarianism by default runs counter to both the stated objectives of fairness measures and the presumptive aim of the field: to improve outcomes for historically disadvantaged or marginalized groups. When fairness can only be achieved by making everyone worse off in material or relational terms (through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality), something has gone wrong in translating the vague concept of “fairness” into practice.

Leveling down should be rejected in fairML because it (1) unnecessarily and arbitrarily harms advantaged groups in cases where performance is intrinsically valuable, such as medical applications of AI; (2) demonstrates a lack of equal concern for affected groups, undermines social solidarity, and contributes to stigmatization; (3) fails to live up to the substantive aims of equality law and fairML, and squanders the opportunity afforded by interest in algorithmic fairness to substantively address longstanding social inequalities; and (4) fails to meet the aims of many viable theories of distributive justice, including pluralist egalitarian approaches, prioritarianism, and sufficientarianism. This paper critically scrutinizes these initial observations to determine how fairML can move beyond mere leveling down and strict egalitarianism by default. We examine the causes and prevalence of leveling down across fairML and explore possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality-law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify leveling down in practice. The types of decisions for which ML and AI are currently used, together with inherent limitations on data collection and measurement, suggest that leveling down is rarely justified in practice.
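To make the leveling-down problem concrete, consider a minimal numerical sketch (the toy accuracies, group labels, and function name below are our own illustration, not drawn from the paper). A strict gap-based fairness metric sees only the disparity between groups, so it cannot distinguish an intervention that raises the worse-off group from one that degrades the better-off group:

```python
def performance_gap(per_group):
    """Absolute gap in a performance metric (e.g., accuracy) between groups.
    Under a strict gap-based fairness measure, lower is 'fairer'
    regardless of the absolute level each group receives."""
    vals = list(per_group.values())
    return max(vals) - min(vals)

original     = {"group_a": 0.90, "group_b": 0.70}  # unequal accuracies
leveled_up   = {"group_a": 0.90, "group_b": 0.88}  # group_b improved
leveled_down = {"group_a": 0.70, "group_b": 0.70}  # group_a degraded

for name, accs in [("original", original),
                   ("leveled up", leveled_up),
                   ("leveled down", leveled_down)]:
    print(f"{name}: gap = {performance_gap(accs):.2f}")

# Gaps are 0.20, 0.02, and 0.00 respectively: the leveled-down model
# scores as the 'fairest' even though no group is better off, while the
# leveled-up model is penalized for its small residual gap.
```

Any corrective action judged solely by such a gap is steered toward whichever intervention closes it most completely, even when that intervention makes the better-served group worse off and no one better off.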
We propose a first step toward substantive equality in fairML: “leveling up” systems by enforcing minimum acceptable harm thresholds, or “minimum rate constraints,” as fairness constraints at the design stage. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field, and to push future discussion toward opportunities for substantive equality and away from strict egalitarianism by default.
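As a rough illustration of how a minimum rate constraint might be operationalized, the sketch below applies the idea as a post-processing step on classifier scores. It is one possible instantiation under our own assumptions, not the paper’s implementation: the function name, the threshold grid, the choice of true positive rate as the constrained quantity, and the synthetic data are all illustrative, and the paper itself argues for such constraints at the design stage.

```python
import numpy as np

def fit_thresholds_with_min_tpr(scores, y_true, groups, min_tpr=0.80):
    """Per-group decision thresholds under a minimum rate constraint.

    For each group, choose the threshold that maximizes that group's
    accuracy subject to its true positive rate staying at or above
    min_tpr. Groups may end up with unequal rates, but no group falls
    below the floor; nothing in the objective rewards dragging a
    better-served group down toward the worst-off group's level.
    """
    grid = np.linspace(0.0, 1.0, 101)
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        s, y = scores[mask], y_true[mask]
        best_t, best_acc = None, -1.0
        for t in grid:
            pred = (s >= t).astype(int)
            tpr = pred[y == 1].mean() if (y == 1).any() else 1.0
            acc = (pred == y).mean()
            if tpr >= min_tpr and acc > best_acc:
                best_t, best_acc = t, acc
        if best_t is None:  # defensive: floor unattainable for this group
            raise ValueError(f"min TPR {min_tpr} unattainable for group {g}")
        thresholds[g] = float(best_t)
    return thresholds

# Toy usage: group 1 receives noisier scores, so its attainable accuracy
# is lower, but both groups must still clear the same TPR floor.
rng = np.random.default_rng(0)
n = 2000
groups = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)
noise = np.where(groups == 1, 0.35, 0.15)
scores = np.clip(y + rng.normal(0.0, noise, n), 0.0, 1.0)
print(fit_thresholds_with_min_tpr(scores, y, groups, min_tpr=0.85))
```

The design choice worth noting is that the constraint is expressed as an absolute floor rather than as a gap between groups, so satisfying it can only ever require improving (or trading accuracy for) a group’s own rate, never degrading another group’s.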
