Context: There are growing concerns about algorithmic fairness, as some machine learning (ML)-based algorithms have been found to exhibit biases against protected attributes such as gender, race, and age. Individual fairness requires an ML classifier to produce similar outputs for similar individuals. Verification Based Testing (Vbt) is a state-of-the-art black-box testing algorithm for individual fairness that leverages constraint solving to generate test cases.

Objective: Generating diverse test cases is expected to facilitate efficient detection of diverse discriminatory data instances (i.e., inputs that violate individual fairness). Hashing-based sampling techniques draw a sample approximately uniformly at random from the set of solutions of given Boolean constraints. We propose Vbt-X, which improves the testing performance of Vbt by incorporating hashing-based sampling.

Method: We realize hashing-based sampling within Vbt. The challenge is that off-the-shelf hashing-based sampling techniques cannot be integrated in a straightforward manner because the constraints in Vbt are generally not Boolean. We additionally propose several enhancement techniques to make Vbt-X more efficient.

Results: To evaluate our method, we conduct experiments comparing Vbt-X to Vbt, Sg, and ExpGA (other well-known fairness testing algorithms) over a set of configurations spanning several datasets, protected attributes, and ML classifiers. The results show that, in every configuration, Vbt-X detects more discriminatory data instances, with higher diversity, than Vbt and Sg. Vbt-X also detects discriminatory data instances with higher diversity than ExpGA, although it detects fewer instances than ExpGA.

Conclusion: Our proposed method outperforms other state-of-the-art black-box fairness testing algorithms, particularly in terms of diversity. It can serve to efficiently identify flaws in ML classifiers with respect to individual fairness, guiding subsequent improvement of a classifier. Although our method targets individual fairness, it could be adapted, with some technical work, to test other aspects of a software system such as security and counterfactual explanations; this remains for future work.
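To make the fairness notion concrete, the following is a minimal sketch of the individual-fairness check underlying discriminatory-instance detection. It is not the authors' Vbt-X implementation; the function and parameter names are hypothetical, and a scikit-learn-style classifier interface (`predict`) is assumed. An input is a discriminatory data instance if changing only its protected attribute changes the prediction.

```python
# Hypothetical sketch of the individual-fairness notion (not Vbt-X itself):
# an input x is discriminatory if flipping only the protected attribute
# changes the classifier's output.
def is_discriminatory(clf, x, protected_idx, protected_domain):
    """Return True if some alternative value of the protected attribute
    flips the classifier's prediction for x."""
    base = clf.predict([x])[0]
    for v in protected_domain:
        if v == x[protected_idx]:
            continue
        x_alt = list(x)
        x_alt[protected_idx] = v  # vary only the protected attribute
        if clf.predict([x_alt])[0] != base:
            return True  # x violates individual fairness
    return False
```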
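Likewise, the hashing-based sampling idea can be illustrated with a small sketch. Real samplers (e.g., UniGen) add random XOR (parity) constraints to a SAT formula so that the solution space is split into roughly equal cells, then solve within one cell; picking a solution from a small cell approximates uniform sampling. The sketch below brute-forces a toy Boolean constraint instead of calling a SAT solver, so the constraint and all names are illustrative assumptions.

```python
import itertools
import random

# Toy Boolean constraint over x0..x3: "at least two variables are true".
# A real hashing-based sampler would query a SAT solver rather than enumerate.
def constraint(x):
    return sum(x) >= 2

def hash_based_sample(n_vars, n_xors, rng):
    """Draw one approximately uniform solution by intersecting the solution
    set with randomly chosen XOR (parity) constraints."""
    # Each XOR constraint: a random subset of variables and a random parity bit.
    xors = [(rng.sample(range(n_vars), rng.randint(1, n_vars)), rng.randint(0, 1))
            for _ in range(n_xors)]
    cell = [x for x in itertools.product([0, 1], repeat=n_vars)
            if constraint(x)
            and all(sum(x[i] for i in idxs) % 2 == parity
                    for idxs, parity in xors)]
    # The random XORs carve the solution space into ~2**n_xors cells of
    # similar size; a uniform pick inside one cell is near-uniform overall.
    # Real tools retry with fresh XORs if the cell comes out empty.
    return rng.choice(cell) if cell else None

rng = random.Random(0)
print(hash_based_sample(n_vars=4, n_xors=2, rng=rng))
```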