Traffic stops are a crucial point of interaction between citizens and law enforcement, with potential implications for bias and discrimination. This study performs a rigorously validated comparative machine learning analysis, building artificial intelligence (AI) models that predict the outcomes of traffic stops from a dataset sourced from the Montgomery County, Maryland Data Centre, focusing on variables such as driver demographics, violation types, and stop outcomes. We repeated our rigorous validation procedure to build models that predict outcomes with and without race, and with and without gender, as model inputs. The feature selection methods employed regularly selected gender and race as predictor variables, and we observed correlations between model performance and both race and gender. While these findings imply the existence of discrimination based on race and gender, our large-scale analysis (>600,000 samples) demonstrates that top-performing models can be produced that are gender- and race-agnostic, implying the potential to create technology that can help mitigate bias in traffic stops. The findings underscore the need for unbiased data and robust algorithms to address biases in law enforcement practices and enhance public trust in AI technologies deployed in this domain.
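The core experimental design described above is an ablation over protected attributes: train otherwise-identical models with and without race and gender features and compare validated performance. The sketch below illustrates that design under stated assumptions; the file name, column names (`stop_outcome`, `race`, `gender`), and the gradient-boosting model are hypothetical stand-ins, since the abstract does not specify the paper's exact schema, models, or validation protocol.

```python
# Minimal sketch of a protected-attribute ablation study.
# Assumptions (not from the paper): CSV file name, column names, and model choice.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("montgomery_traffic_stops.csv")   # hypothetical file name

target = df["stop_outcome"]                        # hypothetical target column
features = pd.get_dummies(df.drop(columns=["stop_outcome"]))

# Four conditions: with/without race crossed with with/without gender.
ablations = {
    "full": [],
    "no_race": ["race"],
    "no_gender": ["gender"],
    "no_race_no_gender": ["race", "gender"],
}

for name, dropped in ablations.items():
    # Drop all one-hot columns derived from the excluded attributes.
    cols = [c for c in features.columns
            if not any(c.startswith(d) for d in dropped)]
    model = GradientBoostingClassifier()
    scores = cross_val_score(model, features[cols], target, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

If the attribute-free conditions score on par with the full model, as the abstract reports for this dataset, that supports the claim that top-performing, race- and gender-agnostic predictors are attainable.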