Abstract

As Artificial Intelligence (AI) systems become increasingly embedded in daily life, ensuring that they are both fair and reliable is essential. This is not always the case for predictive policing systems: evidence shows biases based on age, race, and sex that lead to individuals being wrongly identified as potential criminals. Given the sustained criticism of these systems' unjust treatment of minority groups, addressing and mitigating this trend is imperative. This study investigated infusing domain knowledge into a predictive policing system to reduce its prevailing fairness issues. The experimental results show a considerable improvement in fairness across all metrics for all protected classes, fostering greater trust in predictive policing by reducing the unfair treatment of individuals.
