Abstract

A second issue is one of uncertainty in the data when trying to apply it. Models used in sentencing systems are designed to create risk scores that are then used, in effect, as probabilities of reoffending. For a group, the two might align reasonably well, but for individuals the data may be less effective. In their report for RUSI, Oswald and colleagues argued that data which seems accurate at the group level can often conceal very low accuracy rates for individuals. Allowing the data to lead to conclusions based on statistical correlations can certainly be risky, Oswald told E&T. Do such conclusions make sense in the operational environment in which they are to be deployed? Is it legal or fair to use certain types of input data to draw conclusions about individuals? Policing data represents only a limited picture of the past, and caution needs to be exercised before using such datasets to make predictions about individuals. Private companies and criminal-justice agencies may try to resist attempts at greater transparency in the use of AI. But, as a number of criminology and technology researchers point out, this should not get in the way of determining how AI is used in policing society. (2 pages)
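The group-versus-individual point can be illustrated with a small simulation. The Python sketch below is hypothetical and not drawn from the article or the RUSI report; the 30 per cent risk figure and the group size are invented purely for illustration. It shows that a risk score can match a group's observed reoffending rate almost exactly while still misclassifying a large share of the individuals in that group.

# Hypothetical sketch (not from the article; numbers are invented): a risk
# model can look well calibrated for a group while saying little about any
# one individual.
import random

random.seed(0)

N = 10_000
predicted_risk = 0.30  # assumed: the model gives everyone in this group a 30% risk score

# Simulate outcomes so that the group-level rate matches the score.
outcomes = [1 if random.random() < predicted_risk else 0 for _ in range(N)]

observed_rate = sum(outcomes) / N
print(f"Predicted group rate: {predicted_risk:.2f}, observed rate: {observed_rate:.2f}")

# At the individual level the score only supports predicting the majority
# class (no reoffence), so it is wrong for every person who does reoffend.
threshold_prediction = 1 if predicted_risk >= 0.5 else 0
errors = sum(1 for y in outcomes if y != threshold_prediction)
print(f"Share of individuals misclassified: {errors / N:.2%}")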
