Abstract

The use of machine learning makes it possible to automate data-driven decision-making, saving time and resources compared with traditional methods that require human intervention. This automation, however, poses significant challenges: the model-building process must incorporate ethical considerations and address potential biases. This research explored previously proposed approaches to ensure fairness through interventions during data preprocessing in the construction of a binary classification model. The use case aimed to develop a model capable of determining whether demobilized individuals from armed groups in Colombia, currently in the process of reintegration, were eligible for the Economic Insertion Benefit. Fairness was evaluated as the difference in false negative rates between men and women. To balance model performance against non-discrimination, the study applied feature engineering, hyperparameter optimization, balancing or resampling, suppression or unawareness, and reweighing, both independently and in combination. The results highlighted the need to complement balancing or resampling techniques that do not take fairness into account with explicit fairness interventions. Conversely, applying balancing or resampling techniques with a fairness focus reduced the difference in false negative rates but increased the overall number of errors. In addition, applying hyperparameter optimization together with reweighing improved fairness without compromising overall model accuracy.
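As a concrete illustration of the evaluation metric and the reweighing intervention named above, the sketch below computes the false-negative-rate gap between groups and the standard Kamiran–Calders reweighing weights. This is a minimal sketch assuming a Python/scikit-learn workflow on NumPy arrays; the variable names (`y_true`, `y_pred`, `sex`) and the choice of scikit-learn are illustrative assumptions, not implementation details taken from the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP): the share of truly eligible cases the model rejects."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fn / (fn + tp) if (fn + tp) > 0 else 0.0

def fnr_gap(y_true, y_pred, sex):
    """Fairness criterion used in the study: |FNR_women - FNR_men|."""
    women, men = sex == "F", sex == "M"
    return abs(false_negative_rate(y_true[women], y_pred[women])
               - false_negative_rate(y_true[men], y_pred[men]))

def reweighing_weights(sex, y):
    """Kamiran-Calders reweighing: w(s, c) = P(S=s) * P(Y=c) / P(S=s, Y=c).

    Group/class cells that are over-represented relative to independence get
    weights below 1, under-represented ones get weights above 1, so the
    weighted sample looks independent of the sensitive attribute.
    """
    w = np.empty(len(y), dtype=float)
    for s in np.unique(sex):
        for c in np.unique(y):
            mask = (sex == s) & (y == c)
            if mask.any():
                w[mask] = (np.mean(sex == s) * np.mean(y == c)) / np.mean(mask)
    return w
```

The weights would typically be passed to the learner through a `sample_weight` argument, e.g. `LogisticRegression().fit(X, y, sample_weight=reweighing_weights(sex, y))`; the fairness-focused resampling the abstract contrasts with this would instead duplicate or drop rows per (group, class) cell rather than weight them.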
