Abstract

Correctional agencies use risk assessment instruments for a wide range of purposes, including to help classify, manage, and treat offenders. The literature on offender risk assessment largely focuses on predictive accuracy, and far less research examines reliability in scoring. This study addresses this gap by assessing how reliably and accurately a group of trained raters score one particular risk assessment tool, the Level of Service/Case Management Inventory (LS/CMI). Findings reveal an adequate to strong level of inter-rater reliability across the domains of the LS/CMI. The results also suggest a wide range of rater accuracy across the items and domains of the LS/CMI. The policy and practical implications of these findings are discussed.
