Abstract

With frequent reports of biased outcomes from AI systems, fairness has rightfully become an active area of ML research. However, while progress has been made on the theoretical analysis and formulation of fairness as constraints on error probabilities, our ability to design and train modern deep learning models that reach targeted fairness goals in practice is still limited. In this work, we focus on an interesting yet common fairness setting, where multiple samples are collected from each individual, and the goal is to maximally reduce performance disparity among individuals while maintaining overall model performance. To obtain such fair deep learning models, we use mode connectivity combined with multiobjective optimization to select the best model out of an identified feasible set of model weight configurations with similar overall performance but different distributions of performance over individuals. Our method is model-agnostic and effectively bridges fairness theory and practice.
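The core idea, selecting a point along a mode-connecting path in weight space that keeps overall performance near the endpoints while shrinking per-individual disparity, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear model, the toy data, the linear path, the 5% loss budget, and all names (`per_individual_loss`, `w_a`, `w_b`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear model scored on samples grouped by
# individual. w_a and w_b stand in for two trained weight configurations
# assumed to have similar overall loss.
n_individuals, n_samples, n_features = 5, 20, 3
X = rng.normal(size=(n_individuals, n_samples, n_features))
true_w = np.array([1.0, -0.5, 0.25])
y = X @ true_w + 0.1 * rng.normal(size=(n_individuals, n_samples))

def per_individual_loss(w):
    # Mean squared error computed separately for each individual's samples.
    preds = X @ w
    return ((preds - y) ** 2).mean(axis=1)

w_a = np.array([1.05, -0.45, 0.20])
w_b = np.array([0.95, -0.55, 0.30])

# Walk a linear mode-connecting path w(t) = (1 - t) * w_a + t * w_b.
# Among points whose overall loss stays within a small budget of the
# better endpoint, keep the one with the smallest spread of
# per-individual losses (the performance disparity).
ts = np.linspace(0.0, 1.0, 51)
budget = 1.05 * min(per_individual_loss(w_a).mean(),
                    per_individual_loss(w_b).mean())

best_t, best_gap = None, np.inf
for t in ts:
    losses = per_individual_loss((1 - t) * w_a + t * w_b)
    gap = losses.max() - losses.min()
    if losses.mean() <= budget and gap < best_gap:
        best_t, best_gap = t, gap

print(best_t, best_gap)
```

In practice the path between deep models is typically a learned curve (e.g. a quadratic Bezier path) rather than a straight line, and the disparity/accuracy trade-off is resolved with multiobjective optimization rather than a fixed budget, but the selection logic is the same.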
