Abstract

Recently, amid growing awareness that computer algorithms are not neutral tools but can cause harm by reproducing and amplifying bias, attempts to detect and prevent such biases have intensified. An approach that has received considerable attention in this regard is the Value Sensitive Design (VSD) methodology, which aims to contribute to both the critical analysis of (dis)values in existing technologies and the construction of novel technologies that account for specific desired values. This article provides a brief overview of the key features of the Value Sensitive Design approach, examines its contributions to understanding and addressing issues around bias in computer systems, outlines the current debates on algorithmic bias and fairness in machine learning, and discusses how such debates could profit from VSD-derived insights and recommendations. Relating these debates on values in design and algorithmic bias to research on cognitive biases, we conclude by stressing our collective duty to not only detect and counter biases in software systems, but to also address and remedy their societal origins.

Highlights

  • When, in 2016, investigative journalists at ProPublica published a report indicating that a software system used in US courts was racially biased, a lively debate ensued

  • The journalists had found that COMPAS, a decision support tool used by judges and parole officers to assess a defendant's likelihood to re-offend, was systematically overestimating the recidivism risk of black defendants while underestimating that of white defendants

  • This article provides a brief overview of the key features of Value Sensitive Design (Section 2), examines its contributions to understanding and addressing issues around bias in computer systems (Section 3), outlines the current debates on algorithmic bias and fairness in machine learning (Section 4), and discusses how such debates could profit from VSD-derived insights and recommendations (Section 5)


Introduction

When, in 2016, investigative journalists at ProPublica published a report indicating that a software system used in US courts was racially biased, a lively debate ensued. This article provides a brief overview of the key features of Value Sensitive Design (Section 2), examines its contributions to understanding and addressing issues around bias in computer systems (Section 3), outlines the current debates on algorithmic bias and fairness in machine learning (Section 4), and discusses how such debates could profit from VSD-derived insights and recommendations (Section 5). Relating these debates on values in design and algorithmic bias to research on cognitive biases, we conclude by stressing our collective duty to not only detect and counter biases in software systems, but to also address and remedy their societal origins (Section 6).

Value Sensitive Design: a brief overview
Bias in computer systems
Algorithmic bias and fairness in machine learning
Discussion
Concluding remarks
Full Text
Published version (Free)