Abstract

While using artificial intelligence (AI) to predict natural hazards is promising, applying a predictive policing algorithm (PPA) to predict human threats to others continues to be debated. Although PPAs were reported to be initially successful in Germany and Japan, the killing of Black Americans by police in the US has sparked calls to dismantle AI in law enforcement. However, although PPAs may statistically associate suspects with economically disadvantaged classes and ethnic minorities, the groups they aim to protect are often vulnerable populations as well (e.g., victims of human trafficking, kidnapping, domestic violence, or drug abuse). Thus, it is important to determine how to enhance the benefits of PPAs while reducing bias through better management. In this paper, we propose a policy schema to address this issue. First, after clarifying relevant concepts, we examine major criticisms of PPAs and argue that some of them should be addressed. Given that banning AI or rendering it taboo is an unrealistic solution, we must learn from our errors to improve it. We then identify additional challenges of PPAs and offer recommendations from a policy viewpoint. We conclude that the employment of PPAs should be merged into broader governance of the social safety net and audited publicly by parliament and civil society so that the unjust social structure that breeds bias can be revised.
