Abstract

Over the last decade, due to the growing availability of data and computational resources, machine learning (ML) approaches have started to play a key role in the implementation of affirmative-action policies and programs. The underlying assumption is that resource allocation can be informed by the prediction of individual risks, improving the prioritization of potential beneficiaries and increasing the performance of the system. It is therefore important to ensure that biases in the data or in the algorithms do not lead to some individuals being treated unfavourably. In particular, the notion of group-based fairness seeks to ensure that individuals are not discriminated against on the basis of their group’s protected characteristics. This work proposes an optimization model that improves fairness in ML-enabled affirmative actions, following a post-processing approach. Our case study is an outreach program to increase cervical cancer screening among hard-to-reach women in Bogotá, Colombia. Bias may occur because the protected group (women living in the most severe poverty) is under-represented in the data. Computational experiments show that it is possible to address ML bias while maintaining high levels of accuracy.
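To make the post-processing idea concrete, the sketch below shows one common variant of this family of methods: group-dependent decision thresholds chosen so that each protected group is selected at the same rate, subject to a program capacity constraint. This is a minimal illustration of the general technique, not the optimization model proposed in the paper; the data, group labels, and capacity value are hypothetical.

```python
import numpy as np

def group_thresholds(scores, groups, capacity):
    """Pick one threshold per protected group so that each group is
    selected at (approximately) the same rate, while the total number
    of selected individuals respects the program capacity.

    scores   : predicted risk scores in [0, 1]
    groups   : protected-group label per individual (hypothetical)
    capacity : maximum number of people the outreach program can serve
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    target_rate = capacity / len(scores)  # common selection rate across groups
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # Quantile threshold: keep the top `target_rate` share of group g,
        # so every group ends up with the same within-group selection rate.
        thresholds[g] = np.quantile(g_scores, 1.0 - target_rate)
    return thresholds

def select(scores, groups, thresholds):
    """Apply the group-specific thresholds to obtain final decisions."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: group 1 (the protected group) is under-represented
    # and its scores are shifted downwards, mimicking sampling bias.
    groups = np.concatenate([np.zeros(900, int), np.ones(100, int)])
    scores = np.concatenate([rng.beta(2, 2, 900), rng.beta(2, 3, 100)])
    thr = group_thresholds(scores, groups, capacity=200)
    picked = select(scores, groups, thr)
    for g in (0, 1):
        print(f"group {g}: selection rate = {picked[groups == g].mean():.2f}")
```

With a single global threshold, the under-represented group would be selected at a lower rate because its scores are systematically lower; the group-specific thresholds equalize selection rates while the overall number of selections stays near the stated capacity.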
