Abstract

Child protective service caseworkers need validated instruments to assist them in assessing safety and risk factors for child maltreatment. The literature provides growing evidence that actuarial risk assessments can be valid tools for classifying families according to risk of future maltreatment. However, compared to validity, we know less about the reliability of both whole assessments and individual items. In this study we tested interrater reliability for 108 individual risk and safety items, using 31 realistic case vignettes. Each item was completed six times for each vignette. Fifty-four caseworkers and supervisors participated in rating, generating a total of 20,088 ratings for analysis. To determine item reliability, we used measures of prevalence, percentage agreement, and Fleiss's kappa statistic. Results show that interrater reliability varies widely from item to item. Items with higher prevalence and items documenting demographics, current CPS system involvement, substance abuse, or mental health issues tend to be most reliable. We provide an overview of the testing process, which is replicable in other contexts. We also discuss implications for child protective services practice and for developing or revising risk and safety assessment instruments.
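Fleiss's kappa, the agreement statistic named above, compares observed agreement among a fixed number of raters against the agreement expected by chance given the marginal category proportions. As an illustrative sketch (not the authors' analysis code), the standard formula can be computed from a subjects-by-categories count matrix, where each cell holds the number of raters who assigned that subject to that category:

```python
from typing import List

def fleiss_kappa(ratings: List[List[int]]) -> float:
    """Fleiss's kappa for a matrix where ratings[i][j] is the number of
    raters assigning subject i to category j. Assumes every subject is
    rated by the same number of raters (as in the study's design of six
    ratings per vignette-item)."""
    N = len(ratings)        # number of subjects rated
    n = sum(ratings[0])     # raters per subject
    k = len(ratings[0])     # number of rating categories

    # Observed agreement: mean proportion of agreeing rater pairs per subject.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N

    # Chance agreement from the marginal proportion of each category.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)
```

For example, two binary items each rated by six raters with perfect agreement (`[[6, 0], [0, 6]]`) yield a kappa of 1.0, while maximal disagreement (`[[3, 3], [3, 3]]`) yields a negative kappa, agreement worse than chance.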
