Abstract
In the era of data-driven decision-making, ensuring fairness and equality in machine learning models has become increasingly crucial. Multiple fairness definitions have been proposed to evaluate and mitigate unintended fairness-related harms in real-world applications, yet little research has addressed how these definitions interact with one another. This paper explores the application of a Minimax Pareto-optimized solution to optimize fairness at both the individual and group levels on the Adult Census Income and German Credit datasets. The objective of training a classification model with a multi-objective loss function is to achieve fair outcomes without compromising utility. We investigate the interplay between different fairness definitions, including performance-consistency measures and traditional group and individual fairness metrics, alongside predictive performance. The results presented in this paper highlight the feasibility of incorporating several fairness considerations into machine learning models, which can be applied to use cases with the multiple sensitive features and attributes that characterize real-world applications. This research is a valuable step toward building responsible and transparent machine learning systems that can be incorporated into critical decision-making processes.
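The abstract describes the approach only at a high level; a minimax multi-objective loss of the kind it mentions can be sketched as follows. This is a minimal illustration, assuming PyTorch, a binary classifier, a single binary sensitive attribute, and demographic parity as the group-fairness term; the helper names (`demographic_parity_gap`, `minimax_fairness_loss`) are hypothetical, and the paper's actual formulation may combine more fairness objectives.

```python
import torch
import torch.nn.functional as F

def demographic_parity_gap(probs: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """Absolute gap in mean positive-prediction rate between two groups."""
    # Hypothetical helper; assumes both groups are present in the batch.
    return (probs[group == 0].mean() - probs[group == 1].mean()).abs()

def minimax_fairness_loss(logits, labels, group):
    # Utility objective: standard binary cross-entropy.
    utility = F.binary_cross_entropy_with_logits(logits, labels)
    # Group-fairness objective: demographic parity gap on predicted probabilities.
    fairness = demographic_parity_gap(torch.sigmoid(logits), group)
    # Minimax scalarization: descend on the currently worst objective,
    # steering training toward a Pareto-efficient trade-off.
    return torch.stack([utility, fairness]).max()

# Toy usage: a linear classifier on random features with a binary sensitive attribute.
x = torch.randn(128, 8)
labels = torch.randint(0, 2, (128,)).float()
group = torch.randint(0, 2, (128,))
model = torch.nn.Linear(8, 1)
loss = minimax_fairness_loss(model(x).squeeze(-1), labels, group)
loss.backward()
```

In this sketch, taking the maximum over objectives means each gradient step improves whichever of utility or fairness is currently worst, which is one common way to approximate a minimax Pareto solution.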