Abstract

Machine Learning (ML) models are trained on historical data that may contain societal stereotypes (biases). These biases are inherently learned by ML models and may eventually result in discrimination against certain subjects, for instance, people with particular protected characteristics (race, gender, age, religion, etc.). Since the decisions made by ML models can affect people's lives, the fairness of these models becomes crucially important. When a model is trained under fairness constraints, a significant loss in accuracy relative to the unconstrained model may be unavoidable. Reducing this trade-off between fairness and accuracy, i.e., providing models with high accuracy and as little bias as possible, is an active research question within the fair ML community. In this paper, we extensively investigate fairness metrics across different ML models and study the impact of ensemble models on fairness. To this end, we compare different ensemble strategies and empirically show which strategy is preferable for which fairness metrics. Furthermore, we propose a novel weighting technique that balances fairness and accuracy: we assign each classifier a weight proportional to its performance in terms of both fairness and accuracy. Our experimental results show that this weighting technique reduces the trade-off between fairness and accuracy in ensemble models.
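
The following is a minimal sketch of the fairness-and-accuracy-aware ensemble weighting idea described above. The concrete weighting formula (a convex combination of accuracy and a demographic-parity-based fairness score), the `alpha` hyperparameter, the `fairness_score` helper, and the synthetic data setup are illustrative assumptions, not the paper's exact method.

```python
# Sketch: weight each base classifier by a mix of its accuracy and fairness,
# then combine predictions via a weighted majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def fairness_score(y_pred, sensitive):
    """1 minus the absolute statistical parity difference between two groups
    (one of several possible fairness metrics; an assumed choice here)."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return 1.0 - abs(rate_a - rate_b)


# Hypothetical synthetic data with a binary sensitive attribute.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

classifiers = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(max_depth=5),
               GaussianNB()]

alpha = 0.5  # assumed balance between accuracy and fairness
weights, predictions = [], []
for clf in classifiers:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    acc = accuracy_score(y_te, pred)
    fair = fairness_score(pred, s_te)
    # Weight is proportional to the classifier's accuracy and fairness.
    weights.append(alpha * acc + (1 - alpha) * fair)
    predictions.append(pred)

weights = np.array(weights) / np.sum(weights)
# Weighted majority vote over the base classifiers' predictions.
ensemble_pred = (np.average(np.array(predictions), axis=0,
                            weights=weights) >= 0.5).astype(int)
print("Ensemble accuracy:", accuracy_score(y_te, ensemble_pred))
print("Ensemble fairness:", fairness_score(ensemble_pred, s_te))
```

Setting `alpha` closer to 1 favors accuracy, while values closer to 0 favor fairness; the normalization keeps the weights interpretable as vote shares in the ensemble.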
