Abstract

Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees even though they have proven very efficient. In an up-to-date comparison of state-of-the-art classification algorithms on tabular data, tree boosting outperforms deep learning (Zhang et al. in Expert Syst Appl 82:128–150, 2017). For this reason, we have developed a novel approach to adversarial gradient tree boosting. The objective of the algorithm is to predict the output Y with gradient tree boosting while minimizing the ability of an adversarial neural network to predict the sensitive attribute S. At each iteration, the approach incorporates the gradient of the neural network directly into the gradient tree boosting. We empirically assess our approach on four popular data sets and compare it against state-of-the-art algorithms. The results show that our algorithm achieves higher accuracy while obtaining the same level of fairness, as measured using a set of common fairness definitions.
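To make the mechanism concrete, the following is a minimal, hypothetical Python sketch of the idea: at each boosting round a regression tree is fitted to a pseudo-residual that combines the classifier's own gradient with the negated, weighted gradient of an adversary trying to predict S from the boosted score. For brevity the adversary here is a single logistic unit rather than the neural network used in the paper, and names such as `fit_fagtb_sketch` and `lambda_fair` are illustrative assumptions, not the authors' API.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fagtb_sketch(X, y, s, n_rounds=100, lr=0.1, lambda_fair=1.0, adv_lr=0.5):
    """Boost a classifier for y while an adversary tries to recover s from its score.

    Hypothetical sketch: the adversary is a single logistic unit
    p(s=1) = sigmoid(a*F + b) reading the boosted score F(x),
    standing in for the neural network described in the abstract.
    """
    F = np.zeros(len(y))            # boosted score, F_0 = 0
    a, b = 0.0, 0.0                 # adversary parameters
    trees = []
    for _ in range(n_rounds):
        p_y = sigmoid(F)            # classifier probability for y
        p_s = sigmoid(a * F + b)    # adversary probability for s

        # Negative gradient of the combined loss  L_y(F) - lambda_fair * L_s(F)
        # with respect to F: the usual logistic residual plus a term that pushes
        # the score towards values the adversary cannot exploit.
        residual = (y - p_y) + lambda_fair * a * (p_s - s)

        tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
        F = F + lr * tree.predict(X)
        trees.append(tree)

        # Adversary update: one gradient-descent step on its own logistic loss,
        # using the freshly updated scores F as input.
        p_s = sigmoid(a * F + b)
        a -= adv_lr * np.mean((p_s - s) * F)
        b -= adv_lr * np.mean(p_s - s)
    return trees, lr

def predict_proba(trees, lr, X):
    F = sum(lr * t.predict(X) for t in trees)
    return sigmoid(F)
```

Setting `lambda_fair = 0` recovers plain gradient tree boosting, while larger values trade predictive accuracy for making the boosted score uninformative about S.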

Highlights

  • Machine learning models are increasingly used in decision making processes

  • We propose a novel approach to combine the strength of gradient tree boosting with an adversarial fairness constraint

  • In order to establish the basis for our approach and to introduce our notation, we first summarize the principle of classical gradient tree boosting (see the sketch below)
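As background for the last highlight, here is a minimal sketch of classical gradient tree boosting with a squared-error loss: each round fits a small regression tree to the negative gradient of the loss (here simply the residuals y - F) and adds it to the current model. Function and parameter names are illustrative, not the paper's notation.

```python
# Minimal sketch of classical gradient tree boosting (squared-error loss).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_sketch(X, y, n_rounds=50, lr=0.1, max_depth=3):
    F = np.full(len(y), y.mean())        # F_0: constant initial prediction
    trees = []
    for _ in range(n_rounds):
        residual = y - F                 # negative gradient of 1/2 * (y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F = F + lr * tree.predict(X)     # additive update F_m = F_{m-1} + lr * h_m
        trees.append(tree)
    return trees, y.mean(), lr

def predict(trees, f0, lr, X):
    return f0 + sum(lr * t.predict(X) for t in trees)
```

The adversarial variant sketched after the abstract changes only the pseudo-residual; the rest of the boosting loop stays the same.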


Summary

Introduction

Machine learning models are increasingly used in decision-making processes. In many fields of application, they generally deliver superior performance compared with conventional, deterministic algorithms. However, those models are mostly black boxes which are hard, if not impossible, to interpret, and many incidents of models producing discriminatory predictions have been documented. For this reason, next to optimizing the performance of a machine learning model, the new challenge for data scientists is to determine whether the model's output predictions are discriminatory, and how such unwanted bias can be mitigated as much as possible. We propose a novel approach that combines the strength of gradient tree boosting with an adversarial fairness constraint. To the best of our knowledge, this is the first adversarial learning method for generic classifiers, including non-differentiable machines such as decision trees.

Definitions of Fairness
Demographic Parity
Equalized Odds
Related Work
Gradient Tree Boosting
Min–Max Formulation
Learning
Empirical Results
Synthetic Scenario
Data Sets
Fairness Algorithms
Results
Conclusion

