Abstract

Machine learning algorithms have become increasingly common and now affect many aspects of our lives. However, because most standard, off-the-shelf machine learning algorithms aim solely to maximize prediction performance, the results they produce can be discriminatory. This discrimination issue has prompted both academic researchers and practitioners to develop machine learning algorithms that are fair. Even so, most such algorithms focus on decreasing the disparity in predictions of successful outcomes. They tend to ignore the strategic behavior of the predicted subpopulations, resulting in disparity in the behavior of prediction subjects at equilibrium. One exception is algorithms that use equalized odds as the fairness criterion, which can decrease disparity in behavior; however, they cannot be used in many practical settings. We propose a new class of fair machine learning algorithms that alleviates both disparity in prediction results and disparity in the behavior of prediction subjects, and does not need to account for the sensitive variable explicitly. Our algorithm also complies with the notions of equal treatment and explainable AI, and can be applied to a wide variety of prediction tasks. We demonstrate the theoretical performance of our algorithm in the asymptotic regime. In addition, we demonstrate its practical performance by comparing it with ordinary off-the-shelf algorithms and with existing fair machine learning algorithms available in the IBM AI Fairness 360 suite.
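To make the two disparity notions contrasted above concrete, here is a minimal sketch (synthetic data; all variable and function names are illustrative, not from the paper) computing the demographic-parity gap in predicted success rates and the equalized-odds gap:

```python
# Hypothetical illustration (not the paper's method): the two disparity
# notions from the abstract, computed on synthetic data with NumPy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # sensitive attribute: 0 or 1
y_true = rng.integers(0, 2, n)         # true outcomes
# A biased predictor: slightly more likely to predict 1 for group 1.
y_pred = (rng.random(n) < 0.5 + 0.1 * group).astype(int)

def statistical_parity_diff(y_pred, group):
    """Gap in predicted-success rates between groups (demographic parity)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equalized_odds_diff(y_true, y_pred, group):
    """Max gap in true-positive and false-positive rates between groups."""
    gaps = []
    for y in (0, 1):                   # y=1 gives the TPR gap, y=0 the FPR gap
        m = y_true == y
        gaps.append(abs(y_pred[m & (group == 1)].mean()
                        - y_pred[m & (group == 0)].mean()))
    return max(gaps)

print(statistical_parity_diff(y_pred, group))   # disparity in predictions
print(equalized_odds_diff(y_true, y_pred, group))
```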
