Abstract

There are various arguments in favour of tempering algorithmic decision-making with human judgement. One common family of arguments appeals to concepts and criteria derived from legal philosophy about the nature of law and legal reasoning, and argues that algorithmic systems cannot satisfy them (but humans can). This paper argues that within this family of arguments there is often an implicit appeal to the notion that each case needs to be assessed on its own merits, without comparison to or generalisation from previous cases. This notion of ‘individual justice’ has featured in jurisprudential debates about the granularity of rules and tests, and the (in)justice of discrimination, but has not yet been explicitly imported into debates about justice and algorithmic decision-making. This paper has several aims. The first is to provide an overview of the concept of individual justice and distinguish it from related but distinct arguments about the value of human discretion. Equipped with this account of human judgement as a guarantor of individual justice, the second aim is to consider its place within and beside algorithmic decision-making. It argues that in so far as individual justice is valuable, it can only be meaningfully served through human judgement, because it is antithetical to the kind of pre-determined reasoning that characterises algorithmic systems. This suggests that – to the extent that individual justice is deemed important – a requirement for human intervention and oversight over algorithmic decisions is necessary. The third aim is to consider how individual justice relates to other dimensions of justice, namely consistency and fairness or non-discrimination. Finally, the article discusses two challenges that are raised by this account. The first challenge concerns how individual justice can be accommodated alongside other dimensions of justice in the socio-technical contexts in which humans-in-the-loop are situated.
The second concerns the potential inequities in individual justice that might result from an uneven application of human judgement in algorithmic settings.
