Abstract

Algorithmic tools are increasingly used in child protection decision-making. Fairness considerations of algorithmic tools usually focus on statistical fairness, but there are broader justice implications relating to the data used to construct source databases and to how algorithms are incorporated into complex sociotechnical decision-making contexts. This article explores how the data that inform child protection algorithms are produced and relates this production to both traditional notions of statistical fairness and broader justice concepts. Predictive tools face a number of challenging problems in the child protection context: the data they draw on do not represent child abuse incidence across the population, and child abuse itself is difficult to define, making the key decisions that become data variable and subjective. Algorithms using these data have distorted feedback loops and can encode inequalities and biases. The challenge to justice concepts is that individual and group rights to non-discrimination become threatened as the algorithm itself becomes skewed, leading to inaccurate risk predictions that draw on spurious correlations. The right to be treated as an individual is threatened when statistical risk is based on a group categorisation, and the right of families to understand and participate in the decisions made about them is difficult to uphold when they have not consented to data linkage and the function of the algorithm is obscured by its complexity. The use of uninterpretable algorithmic tools may create ‘moral crumple zones’, where practitioners are held responsible for decisions even when they are partially determined by an algorithm. Many of these criticisms can also be levelled at human decision-makers in the child protection system, but the reification of these processes within algorithms renders their articulation even more difficult and can diminish other important relational and ethical aims of social work practice.

Highlights

  • This article takes a critical perspective on the debates occurring in many nations in relation to the use of algorithms to assist with risk judgements in child protection contexts

  • Evaluating algorithmic tools in child protection by combining technical conceptualisations of fairness with social justice perspectives leads to a number of troubling conclusions

  • Without a database that reflects incidence, the racial and class disproportionalities within child protection system contact are likely to reproduce inequities that relate as much to surveillance biases as they do to differences in true incidence
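
The surveillance-bias feedback loop referred to in the abstract and in the last highlight can be made concrete with a small simulation. The sketch below is purely illustrative: the groups, rates, and the rule that converts recorded cases back into surveillance intensity are all hypothetical, and no real child protection data or deployed algorithm is represented.

```python
import random

random.seed(0)

# Illustrative only: two groups with the SAME true incidence of harm,
# but group B starts with more system contact, so more of its cases
# are observed and recorded in the database.
TRUE_INCIDENCE = 0.05                      # identical for both groups
surveillance = {"A": 0.10, "B": 0.30}      # share of each group the system observes

for round_number in range(1, 6):
    recorded = {}
    for group, watch_rate in surveillance.items():
        population = 10_000
        # Harm occurs at the same rate in both groups...
        harmed = sum(random.random() < TRUE_INCIDENCE for _ in range(population))
        # ...but only surveilled cases become records in the database.
        recorded[group] = sum(random.random() < watch_rate for _ in range(harmed))

    # A naive "risk model" trained on the recorded cases rates group B as
    # riskier, which in this toy model redirects yet more surveillance to B.
    total_records = sum(recorded.values()) or 1
    for group in surveillance:
        apparent_risk = recorded[group] / total_records
        surveillance[group] = min(0.9, 0.5 * surveillance[group] + 0.5 * apparent_risk)

    rates = {g: round(r, 2) for g, r in surveillance.items()}
    print(f"round {round_number}: recorded cases {recorded}, surveillance rates {rates}")
```

Although true incidence never differs between the groups, the recorded data consistently portray group B as far riskier and surveillance of it intensifies round after round; the disproportion reflects who is watched, not who causes harm.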

Introduction

This article takes a critical perspective on the debates occurring in many nations in relation to the use of algorithms to assist with risk judgements in child protection contexts. Fairness considerations of such tools usually centre on statistical conceptions of fairness, yet some scholars suggest that this framework does not go far enough: that justice and rights are more effective concepts with which to analyse predictive tools, as they move beyond technical solutions to consider broader social justice consequences (Gurses et al 2019; Narayanan 2018). These debates should be of much interest to social work, given the discipline's professional commitment to social justice ideals and the sharp uptake of predictive tools in the child protection contexts where many social workers practice. Implications for transparency and implementation within the special context of the child protection system are discussed.
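
To make concrete what ‘statistical fairness’ conventionally measures, the sketch below computes two commonly audited group metrics, selection rate (demographic parity) and false positive rate (one component of equalised odds), over a handful of hypothetical predictions. The records and groups are invented for illustration; satisfying such metrics says nothing about how the underlying records were produced, which is the broader justice question this article pursues.

```python
from collections import defaultdict

# Hypothetical records: (group, true_outcome, predicted_high_risk).
# The data are invented purely to illustrate the metrics.
records = [
    ("A", 0, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1), ("A", 1, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 1, 1),
]

def group_metrics(rows):
    """Selection rate and false positive rate per group, as commonly used
    in statistical fairness audits (demographic parity, equalised odds)."""
    by_group = defaultdict(list)
    for group, actual, predicted in rows:
        by_group[group].append((actual, predicted))

    metrics = {}
    for group, pairs in by_group.items():
        n = len(pairs)
        selected = sum(pred for _, pred in pairs)
        negatives = [(a, p) for a, p in pairs if a == 0]
        false_pos = sum(p for _, p in negatives)
        metrics[group] = {
            "selection_rate": selected / n,
            "false_positive_rate": false_pos / len(negatives) if negatives else 0.0,
        }
    return metrics

for group, m in group_metrics(records).items():
    print(group, m)
```

In this invented example, group B is flagged as high risk twice as often as group A and with double the false positive rate; disparities of this kind are what statistical fairness audits detect, while the justice critiques developed below concern what such metrics cannot see.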

Setting the Scene
Predictive Tool Development
Statistical Fairness and Social Justice
Statistical Fairness and the Sample Frame
The Social Production of Data and the Feedback Loop
Consistently Biased?
Improving the Feedback Loop or Reducing Justice?
Implications for Practice
Considering the Counter-Argument
Conclusions