Abstract

The problem of guaranteeing the fairness of automatic decision-making systems has become a topic of considerable interest. Many competing definitions of fairness have been proposed, as well as methods that aim to achieve or approximate them while still training useful models. The complementary question of testing the fairness of an existing predictor is important both to the creators of machine learning systems and to their users. More specifically, it is important for users to be able to prove that an unfair system affecting them is indeed unfair, even when full and direct access to the system's internals is denied. In this paper, we propose a framework for proving the unfairness of predictors with known accuracy properties, without direct access to the model, the features it is based on, or even its individual predictions. To do so, we analyze the fairness-accuracy trade-off under the definition of demographic parity. We develop an information-theoretic method that uses only an external dataset containing the protected attributes and the targets, and that provides a bound on the accuracy of any fair model predicting the same targets, regardless of the features it is based on. The result is an algorithm that enables proof of unfairness with no cooperation from the system owners.
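
To illustrate what a bound of this general kind can look like (a hedged sketch only, not the paper's actual information-theoretic method): under exact demographic parity, a binary predictor must accept with the same rate q in every protected group, so its error in a group with base rate p_a is at least |q - p_a|. Estimating the group weights and base rates from an external dataset of protected attributes and targets therefore yields an accuracy ceiling for every parity-respecting predictor of those targets, with no access to the model or its features. The function name dp_accuracy_upper_bound and the synthetic data are illustrative assumptions.

    import numpy as np

    def dp_accuracy_upper_bound(A, Y):
        """Hypothetical helper: upper-bounds the accuracy of ANY binary predictor
        that satisfies exact demographic parity (prediction independent of A),
        using only samples of the protected attribute A and the binary target Y."""
        A, Y = np.asarray(A), np.asarray(Y)
        groups = np.unique(A)
        pi = np.array([np.mean(A == a) for a in groups])    # group weights  P(A = a)
        p = np.array([np.mean(Y[A == a]) for a in groups])  # base rates     P(Y = 1 | A = a)

        # A parity-respecting predictor accepts with the same rate q in every group,
        # so its error in group a is at least |q - p_a|.  The weighted error
        # sum_a pi_a * |q - p_a| is convex and piecewise linear in q, so its
        # minimum over q is attained at one of the base rates.
        min_error = min(float(np.sum(pi * np.abs(q - p))) for q in p)
        return 1.0 - min_error

    # Synthetic external dataset (illustrative only): two equal-sized groups
    # whose base rates differ by 0.4.
    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=10_000)
    Y = rng.binomial(1, np.where(A == 1, 0.7, 0.3))
    print(f"Any demographic-parity predictor of Y has accuracy <= "
          f"{dp_accuracy_upper_bound(A, Y):.3f}")

On this synthetic sample the bound evaluates to roughly 0.8: any predictor that is more accurate than that must treat the two groups differently, which is the kind of accuracy-versus-parity tension the abstract refers to.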
