Abstract

Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race. One cause is societal bias encoded in the training data (e.g., under-representation of females in the tech workforce), which is aggravated in the presence of unbalanced class distributions (e.g., when “hired” is the minority class in a hiring application). State-of-the-art fairness-aware machine learning approaches focus on preserving overall classification accuracy while mitigating discrimination. In the presence of class imbalance, such methods may further aggravate discrimination by denying an already underrepresented group (e.g., females) fundamental rights to equal social privileges (e.g., equal access to employment). To this end, we propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round, taking into account not only the class errors but also the fairness-related performance of the model, defined cumulatively over the partial ensemble. Beyond this in-training boosting of the group discriminated against in each round, AdaFair directly tackles imbalance in the post-training phase by optimizing the number of ensemble learners for balanced error performance. AdaFair can accommodate different parity-based fairness notions and effectively mitigates discriminatory outcomes.
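To make the described mechanism concrete, the Python sketch below illustrates the two ideas from the abstract: an AdaBoost-style round update in which the instance weights are amplified by a fairness cost computed from the cumulative predictions of the partial ensemble, and a post-training step that picks the ensemble size minimizing balanced error. This is a minimal sketch, not the authors' published AdaFair algorithm; the function names (fit_fair_boost, select_ensemble_size), the binary protected-attribute vector group, and the exact cost formula are illustrative assumptions.

    # Minimal sketch of a fairness-aware AdaBoost variant in the spirit of the
    # abstract. Assumptions: X is a 2-D numpy array, y and group are numpy
    # arrays with values in {0, 1}; the weighting formula is illustrative.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def _rates(y_true, y_pred, mask):
        """False-negative and false-positive rates on the masked subgroup."""
        pos, neg = mask & (y_true == 1), mask & (y_true == 0)
        fnr = np.mean(y_pred[pos] == 0) if pos.any() else 0.0
        fpr = np.mean(y_pred[neg] == 1) if neg.any() else 0.0
        return fnr, fpr

    def _ensemble_predict(learners, alphas, X):
        """Weighted-majority vote of the (partial) ensemble, labels in {0, 1}."""
        score = sum(a * (2 * h.predict(X) - 1) for h, a in zip(learners, alphas))
        return (score >= 0).astype(int)

    def fit_fair_boost(X, y, group, n_rounds=50, eps=1e-3):
        n = len(y)
        w = np.full(n, 1.0 / n)
        learners, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.clip(np.average(pred != y, weights=w), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            learners.append(stump)
            alphas.append(alpha)

            # Cumulative fairness of the *partial* ensemble built so far.
            ens = _ensemble_predict(learners, alphas, X)
            fnr_p, fpr_p = _rates(y, ens, group == 1)    # protected group
            fnr_np, fpr_np = _rates(y, ens, group == 0)  # non-protected group

            # Fairness cost: extra boost for misclassified members of whichever
            # group the partial ensemble currently treats worse.
            u = np.zeros(n)
            if abs(fnr_p - fnr_np) > eps:                # unequal true-positive rates
                hurt = 1 if fnr_p > fnr_np else 0
                u[(group == hurt) & (y == 1) & (ens == 0)] = abs(fnr_p - fnr_np)
            if abs(fpr_p - fpr_np) > eps:                # unequal true-negative rates
                hurt = 1 if fpr_p > fpr_np else 0
                u[(group == hurt) & (y == 0) & (ens == 1)] = abs(fpr_p - fpr_np)

            # AdaBoost-style update, amplified by the fairness cost, renormalized.
            w = w * np.exp(alpha * (pred != y)) * (1.0 + u)
            w /= w.sum()
        return learners, alphas

    def select_ensemble_size(learners, alphas, X, y):
        """Post-training step: keep the prefix of learners that minimizes the
        balanced error rate, 1 - (TPR + TNR) / 2, to counter class imbalance."""
        best_t, best_ber = 1, np.inf
        for t in range(1, len(learners) + 1):
            pred = _ensemble_predict(learners[:t], alphas[:t], X)
            tpr = np.mean(pred[y == 1] == 1) if (y == 1).any() else 0.0
            tnr = np.mean(pred[y == 0] == 0) if (y == 0).any() else 0.0
            ber = 1.0 - 0.5 * (tpr + tnr)
            if ber < best_ber:
                best_t, best_ber = t, ber
        return best_t

The fairness cost here instantiates an equalized-odds-style parity notion (equal true-positive and true-negative rates across groups); other parity-based notions could be plugged in by changing which rate gaps drive the boost.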
