Abstract

Artificial intelligence is increasingly permeating many types of high-stakes societal decision-making, such as the work of criminal courts. Various types of algorithmic tools have already been introduced into sentencing. This article concerns the use of algorithms designed to deliver sentence recommendations. More precisely, it considers how one should determine whether one type of sentencing algorithm (e.g., a model based on machine learning) would be ethically preferable to another type of sentencing algorithm (e.g., a model based on old-fashioned programming). Whether the implementation of sentencing algorithms is ethically desirable obviously depends on the answers to various questions. For instance, some of the traditional issues that have received considerable attention are algorithmic biases and lack of transparency. However, the purpose of this article is to direct attention to a further challenge that has not yet been considered in the discussion of sentencing algorithms. That is, even if it is assumed that the traditional challenges concerning biases, transparency, and cost-efficiency have all been solved or proven insubstantial, a further serious challenge remains in the comparison of sentencing algorithms; namely, that we do not yet possess an ethically plausible and applicable criterion for assessing how well sentencing algorithms perform.
