Abstract

Artificial intelligence (AI), combined with statistical techniques such as predictive analytics, is increasingly applied in high-stakes decision-making systems that seek to predict and/or classify the risk of clients experiencing negative outcomes while receiving services. One such system is child welfare, where the disproportionate involvement of marginalized and vulnerable children and families raises ethical concerns about building fair and equitable models. One central issue in this debate is the over-representation of risk factors in algorithmic inputs and outputs, as well as the concomitant over-reliance on predicting risk. Would models perform better across groups if their variables represented both risk and protective factors associated with outcomes of interest? Would models also be more equitable across groups if they predicted alternative service outcomes? Using a risk-and-resilience framework applied in the field of social work, and the child welfare system as an illustrative example, this article explores a strengths-based approach to predictive model building. We define risk and protective factors, and then identify and illustrate how protective factors perform in a model trained to predict an alternative outcome of child welfare service involvement: the unsubstantiation of an allegation of maltreatment.
