When making decisions in high-pressure situations, police officers face heavy cognitive demands and often lack access to data about the people with whom they are interacting. Artificial intelligence (AI) tools that provide such data can potentially improve officers’ ability to respond effectively to calls and thus bolster public safety. However, research across the social sciences has documented persistent biases in AI-assisted work. We propose a framework for understanding how bias can creep into AI-assisted police work and how to intervene. In a cycle of bias, AI tools provide biased information to officers, which in turn promotes biased responses during interactions with the public, ultimately resulting in biased incident reports that feed back into and amplify the original biases in the AI systems. Our proposed interventions focus on training and nudges that increase officers’ use of deliberative processing, empathic mindsets, and perspective-getting techniques and that encourage the writing of detailed, debiased incident reports. We recommend taking a cognitive view of policing and drawing on insights from behavioral science research to maximize the benefits of AI tools while minimizing the risk that they will amplify biases.