Abstract

The vast majority of machine learning research focuses on improving the correctness of outcomes (i.e., accuracy, error rate, and other metrics). However, machine learning outcomes can have a substantial negative impact if they marginalize certain groups of data, especially when those groups correspond to people who end up being discriminated against. Recent papers therefore try to tackle the unfair treatment of certain groups of data (humans), but they mostly focus on a single sensitive feature with binary values. In this paper, we propose FairBoost, a boosting ensemble method that takes both fairness and accuracy into consideration to mitigate unfairness in classification tasks during model training. The method aims to close the gap between proposed approaches and real-world applications, where there is often more than one sensitive feature and each may contain multiple categories. The proposed approach checks for bias and corrects it in each iteration of building the boosted ensemble. FairBoost is evaluated experimentally and compared to similar existing algorithms. The results across different datasets and settings show no significant change in the overall quality of classification, while the fairness of the outcomes is vastly improved.
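The abstract describes the general mechanism (bias is measured and corrected in each boosting iteration, over possibly several multi-valued sensitive features) but not the exact update rule. The sketch below is only an illustration of that general idea, assuming an AdaBoost-style loop with an additional group-based reweighting step; the correction rule, function names, and parameters are assumptions for illustration, not the authors' actual algorithm.

```python
# Illustrative sketch of fairness-aware boosting in the spirit of the abstract.
# The fairness correction (upweighting samples from groups with below-average
# positive prediction rates) is an assumed rule, not taken from the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def fairness_aware_boost(X, y, sensitive, n_rounds=20, fairness_strength=0.5):
    """Train an AdaBoost-style ensemble with an extra fairness reweighting step.

    X:         (n_samples, n_features) feature matrix
    y:         labels in {0, 1}
    sensitive: (n_samples, n_sensitive) categorical sensitive features,
               where each column may contain more than two categories
    """
    n = len(y)
    y_signed = np.where(y == 1, 1, -1)      # AdaBoost works with {-1, +1} labels
    w = np.full(n, 1.0 / n)                 # uniform initial sample weights
    learners, alphas = [], []

    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        pred_signed = np.where(pred == 1, 1, -1)

        # Standard AdaBoost weight update based on weighted training error.
        err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y_signed * pred_signed)

        # Assumed fairness correction: for every sensitive feature and every
        # category, compare the group's positive prediction rate with the
        # overall rate and upweight groups that are under-served.
        overall_rate = np.mean(pred == 1)
        for col in range(sensitive.shape[1]):
            for category in np.unique(sensitive[:, col]):
                mask = sensitive[:, col] == category
                gap = overall_rate - np.mean(pred[mask] == 1)
                if gap > 0:
                    w[mask] *= 1.0 + fairness_strength * gap

        w /= np.sum(w)                      # renormalise sample weights
        learners.append(stump)
        alphas.append(alpha)

    def predict(X_new):
        # Weighted majority vote of the boosted weak learners.
        scores = sum(a * np.where(m.predict(X_new) == 1, 1, -1)
                     for m, a in zip(learners, alphas))
        return (scores >= 0).astype(int)

    return predict
```

Under these assumptions, the loop trades a small amount of accuracy-driven weight mass for fairness-driven corrections each round, which mirrors the abstract's claim that fairness improves while overall classification quality remains largely unchanged.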
