Abstract

Discrimination has been demonstrated in many machine learning applications, which calls for thorough fairness testing before their deployment in ethics-relevant domains. Compared with identifying individual discrimination, testing against group discrimination, which is mostly hidden, is a widely concerning yet much less studied problem. In this work, we propose TestSGD, an interpretable testing approach that systematically identifies and measures hidden (which we call "subtle") group discrimination of a neural network, characterized by conditions over combinations of the sensitive attributes. Specifically, given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups. TestSGD also provides an estimated group discrimination score, computed by sampling the input space, which measures the degree of the identified subtle group discrimination and is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets covering both structured data and text data. The experimental results show that TestSGD is effective and efficient in identifying and measuring such subtle group discrimination that has never been revealed before. Furthermore, we show that the testing results of TestSGD can be used to mitigate such discrimination through retraining, with a negligible accuracy drop.
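To illustrate the sampling-based estimation described above, the following is a minimal sketch, not the authors' implementation. It assumes the group discrimination score is the absolute difference in favorable-outcome rates between the two groups induced by a rule over sensitive attributes, that samples are drawn independently, and that the error bound follows from Hoeffding's inequality. The names `predict`, `sample_group_a`, and `sample_group_b` are hypothetical placeholders for the model under test and for generators of inputs satisfying or violating the identified rule.

```python
import math
import numpy as np

def estimate_group_discrimination(predict, sample_group_a, sample_group_b,
                                  n_samples=10000, favorable_label=1,
                                  confidence=0.99):
    """Sketch of a sampling-based group discrimination score estimate.

    The score is taken to be the absolute difference in the rate of
    favorable predictions between the two groups defined by a rule over
    the sensitive attributes.
    """
    xs_a = sample_group_a(n_samples)   # sampled inputs satisfying the rule
    xs_b = sample_group_b(n_samples)   # sampled inputs violating the rule

    rate_a = np.mean(predict(xs_a) == favorable_label)
    rate_b = np.mean(predict(xs_b) == favorable_label)
    score = abs(rate_a - rate_b)

    # Hoeffding bound: for each group, P(|estimate - true rate| > eps)
    # <= 2 * exp(-2 * n_samples * eps**2). With a union bound over the
    # two groups, the score is within 2 * eps of its true value with
    # probability at least `confidence`.
    delta = (1 - confidence) / 2       # failure probability per group
    eps = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return score, 2 * eps
```

In this sketch, increasing `n_samples` tightens the reported error bound, which mirrors the abstract's claim that the estimated score is accurate up to a bound determined by the sampling budget.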
