Abstract
Context: There are growing concerns about algorithmic fairness, as some machine learning (ML)-based algorithms have been found to exhibit biases against protected attributes such as gender, race, and age. Individual fairness requires an ML classifier to produce similar outputs for similar individuals. Verification-Based Testing (Vbt) is a state-of-the-art black-box testing algorithm for individual fairness that leverages constraint solving to generate test cases.

Objective: Generating diverse test cases is expected to facilitate efficient detection of diverse discriminatory data instances (i.e., cases that violate individual fairness). Hashing-based sampling techniques draw a sample approximately uniformly at random from the set of solutions of given Boolean constraints. We propose Vbt-X, which augments Vbt with hashing-based sampling to improve its testing performance.

Method: We realize hashing-based sampling for Vbt. The challenge is that off-the-shelf hashing-based sampling techniques cannot be integrated in a straightforward manner, because the constraints in Vbt are generally not Boolean. Moreover, we propose several enhancement techniques to make Vbt-X more efficient.

Results: To evaluate our method, we conduct experiments in which Vbt-X is compared to Vbt, Sg, and ExpGA (other well-known fairness testing algorithms) over a set of configurations consisting of several datasets, protected attributes, and ML classifiers. The results show that, in each configuration, Vbt-X detects more discriminatory data instances, and with higher diversity, than Vbt and Sg. Vbt-X also detects discriminatory data instances with higher diversity than ExpGA, although it detects fewer of them than ExpGA does.

Conclusion: Our proposed method outperforms other state-of-the-art black-box fairness testing algorithms, particularly in terms of diversity. It can serve to efficiently identify flaws in an ML classifier with respect to individual fairness for subsequent improvement. Although our method is specific to individual fairness, it could, with some technical adaptations, be applied to testing other aspects of a software system, such as security and counterfactual explanations; this remains future work.
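The core idea behind hashing-based sampling can be illustrated with a minimal, self-contained sketch: each random XOR (parity) constraint halves the solution set in expectation, so a solution drawn from the small surviving set is approximately uniform over all solutions. The sketch below is an illustrative toy, assuming brute-force enumeration over a handful of Boolean variables and hypothetical helper names (`hash_based_sample`, etc.); it is not Vbt-X's implementation, which must additionally handle the non-Boolean constraints arising in Vbt.

```python
import itertools
import random

def enumerate_solutions(n_vars, constraint):
    """Enumerate all satisfying assignments of a Boolean constraint
    over n_vars variables (brute force; fine for a toy example)."""
    return [bits for bits in itertools.product([0, 1], repeat=n_vars)
            if constraint(bits)]

def random_xor_constraint(n_vars):
    """Draw a random parity constraint: a random subset of variables
    whose XOR must equal a random right-hand-side bit."""
    subset = [i for i in range(n_vars) if random.random() < 0.5]
    rhs = random.randint(0, 1)
    return subset, rhs

def satisfies_xor(bits, xor):
    subset, rhs = xor
    return sum(bits[i] for i in subset) % 2 == rhs

def hash_based_sample(n_vars, constraint, n_xors):
    """Sample approximately uniformly from the solutions of `constraint`
    by intersecting the solution set with random XOR constraints and
    picking one survivor at random. Returns None if no solution
    survives (a real sampler would retry with fresh XORs)."""
    xors = [random_xor_constraint(n_vars) for _ in range(n_xors)]
    survivors = [s for s in enumerate_solutions(n_vars, constraint)
                 if all(satisfies_xor(s, x) for x in xors)]
    return random.choice(survivors) if survivors else None

# Toy constraint: "at least two of four variables are true".
print(hash_based_sample(4, lambda bits: sum(bits) >= 2, n_xors=2))
```

In practice, a constraint solver replaces the brute-force enumeration, and the number of XOR constraints is tuned so that only a few solutions survive each round; repeated calls then yield diverse, near-uniform test cases rather than the clustered solutions a solver would return on its own.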