Abstract

There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness against which algorithms can be tested, many of which conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented within narrow and targeted fairness toolkits for algorithm assessment that are difficult to integrate into an algorithm's broader ethical assessment. In this paper, we derive lessons from ethical philosophy and welfare economics as they relate to the contextual factors relevant for fairness. In particular, we highlight the debate around the acceptability of particular inequalities and the inextricable links between fairness, welfare, and autonomy. We propose Key Ethics Indicators (KEIs) as a way towards providing a more holistic understanding of whether or not an algorithm is aligned with the decision-maker's ethical values.

Highlights

  • Algorithms are increasingly used to inform critical decisions across high-impact domains, from credit risk evaluation to hiring to criminal justice

  • One of our contributions is to derive lessons from ethical philosophy and from welfare economics on which contextual considerations are important in assessing an algorithm's ethics, beyond what can be captured in a mathematical formula

  • We refer to the debate in ethical philosophy on what constitutes acceptable vs. unacceptable inequalities


Introduction

Algorithms are increasingly used to inform critical decisions across high-impact domains, from credit risk evaluation to hiring to criminal justice. The fairness toolkit landscape so far reflects the reductionist understanding of fairness as mathematical conditions, as the implementations rely on narrowly defined fairness metrics to provide "pass/fail" reports. These toolkits can sometimes give practitioners conflicting information about an algorithm's fairness, which is unsurprising given that it is mathematically impossible to satisfy some of the fairness conditions simultaneously [37]. This reflects the conflicting visions of fairness espoused by each mathematical definition and their underlying ethical assumptions [6].
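As a minimal illustrative sketch (not from the paper, using synthetic data and hypothetical helper functions), the following Python snippet shows how two widely used fairness metrics can disagree on the very same predictions: a classifier that perfectly matches the true labels satisfies equal opportunity (equal true-positive rates across groups) yet violates demographic parity when the groups have different base rates.

```python
# Illustrative sketch: two common fairness metrics evaluated on the same
# synthetic predictions can yield conflicting "pass/fail" verdicts.

def demographic_parity_diff(preds, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_diff(preds, labels, groups):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Synthetic data: the two groups have different base rates of the true label.
groups = ["A"] * 4 + ["B"] * 4
labels = [1, 1, 0, 0,  1, 0, 0, 0]  # group A: base rate 0.5; group B: 0.25
preds  = [1, 1, 0, 0,  1, 0, 0, 0]  # a classifier that matches the labels exactly

print(demographic_parity_diff(preds, groups))         # 0.25 -> parity violated
print(equal_opportunity_diff(preds, labels, groups))  # 0.0  -> equal opportunity met
```

Because the groups differ in base rates, no threshold adjustment can make this classifier satisfy both criteria at once, which is one instance of the impossibility results cited above [37].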
