Abstract
Quantitative recidivism risk assessment can be used at the pretrial detention, trial, sentencing, and/or parole stages of the justice system. It has been criticized for what it measures, for whether its predictions are more accurate than those made by humans, for whether it creates or increases inequality and discrimination, and for whether it compromises or violates other aspects of fairness. This criticism becomes even more topical with the arrival of the Artificial Intelligence (AI) Act. This article identifies and applies the relevant rules of the proposed AI Act to quantitative recidivism risk assessment, focusing on the proposed rules on the quality of the data and models used, on biases, and on human oversight. It concludes that legislators may consider requiring providers of high-risk AI systems to demonstrate that their solution performs significantly better than risk assessments based on simple models, and better than human assessment. Furthermore, there is no single way to evaluate the performance of quantitative recidivism risk assessment tools that are or may be deployed in practice. Finally, three approaches to human oversight are discussed for correcting the negative effects of quantitative risk assessment: the optional, benchmark, and feedback approaches.
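To illustrate the kind of benchmarking the abstract envisages, the sketch below (not from the article) shows how a provider might compare a more complex risk model against a simple baseline on held-out data. The dataset is synthetic and the model choices, metric, and sample sizes are illustrative assumptions only, assuming scikit-learn is available.

```python
# Minimal sketch, assuming scikit-learn: benchmark a complex risk model
# against a simple baseline, as the abstract suggests legislation could
# require. All data here is synthetic; features and labels are stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for defendant features and observed reoffending labels.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# "Simple model" baseline: a plain logistic regression.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
simple_auc = roc_auc_score(y_test, simple.predict_proba(X_test)[:, 1])

# Candidate high-risk AI system: a more complex learner on the same data.
candidate = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
candidate_auc = roc_auc_score(y_test, candidate.predict_proba(X_test)[:, 1])

print(f"simple baseline AUC: {simple_auc:.3f}")
print(f"candidate model AUC: {candidate_auc:.3f}")
# A provider would then need to show the gap is statistically and practically
# significant (e.g. via bootstrap confidence intervals), not merely positive.
```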