Abstract
The use of algorithmic decision-making systems based on machine learning models has created a need for fair (unbiased) and explainable classification outcomes. In particular, machine learning algorithms can encode biases, which may result in discriminatory decisions against groups defined by sensitive attributes such as gender, race, or age. Although a number of decision tree learning methods have been proposed to reduce the risk of discrimination, they usually rely on a single fairness metric. In general, optimizing a model for a single fairness metric is insufficient to mitigate discrimination, since bias can originate from multiple sources, such as the data itself or the optimization process. In this paper, we propose a novel decision tree learning process that uses multiple fairness metrics to address both group and individual discrimination. This is achieved by extending the attribute selection procedure to consider not only information gain but also gain in fairness. Computational experiments on fourteen datasets with various sensitive features demonstrate that the proposed Fair-C4.5 models improve fairness without loss of predictive accuracy compared to the well-known C4.5 algorithm and the fairness-aware FFTree algorithm.
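For intuition, a fairness-aware splitting criterion of the kind the abstract describes could be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes statistical parity difference as the group-fairness metric, majority-class predictions per branch as a surrogate for the tree's output, and a weighted sum (with an assumed trade-off weight `alpha`) as the rule combining information gain with fairness gain. All function names and the combination rule are hypothetical.

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(y, split_mask):
    """Classic C4.5-style entropy reduction for a binary split."""
    n = len(y)
    left, right = y[split_mask], y[~split_mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return entropy(y) - (len(left) / n) * entropy(left) \
                      - (len(right) / n) * entropy(right)

def discrimination(y_pred, s):
    """Statistical parity difference: |P(pred=1 | s=1) - P(pred=1 | s=0)|."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def fairness_gain(y, s, split_mask):
    """Reduction in discrimination when each branch predicts its majority
    class, relative to predicting the overall majority at the parent node.
    (Illustrative surrogate; Fair-C4.5 may define fairness gain differently.)"""
    majority = lambda labels: int(labels.mean() >= 0.5)
    parent_pred = np.full(len(y), majority(y))
    split_pred = np.where(split_mask,
                          majority(y[split_mask]),
                          majority(y[~split_mask]))
    return discrimination(parent_pred, s) - discrimination(split_pred, s)

def split_score(y, s, split_mask, alpha=0.5):
    """Hypothetical combined attribute-selection criterion: a weighted sum
    of information gain and fairness gain (alpha is an assumed knob)."""
    return alpha * information_gain(y, split_mask) \
         + (1 - alpha) * fairness_gain(y, s, split_mask)

# Toy usage: score two candidate binary splits on a small dataset.
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # class labels
s = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # sensitive attribute
split_a = np.array([True, True, False, False, True, False, True, False])
split_b = np.array([True, True, True, True, False, False, False, False])
print(split_score(y, s, split_a), split_score(y, s, split_b))
```

In this toy example, `split_a` separates the classes cleanly while leaving both sensitive groups with equal positive rates, so it scores higher than `split_b`, which splits exactly along the sensitive attribute and carries no information gain. The actual Fair-C4.5 procedure additionally covers individual fairness metrics, which this sketch omits.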