Abstract

Machine Learning (ML) models can exhibit biased behavior, or algorithmic discrimination, resulting in unfair or discriminatory outcomes. The bias in an ML model can stem from various factors such as the training dataset, the choice of ML algorithm, or the hyperparameters used to train the model. In addition to evaluating a model’s correctness, it is essential to test ML models for fair and unbiased behavior. In this paper, we present a combinatorial testing-based approach to perform fairness testing of ML models. Our approach is model agnostic and evaluates fairness violations of a pre-trained ML model in a two-step process. In the first step, we create an input parameter model from the training dataset and then use that model to generate a t-way test set. In the second step, for each test, we modify the value of one or more protected attributes and check whether the prediction changes, indicating a fairness violation. We performed an experimental evaluation of the proposed approach using ML models trained with tabular datasets. The results suggest that the proposed approach can successfully identify fairness violations in pre-trained ML models.
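
The following is a minimal sketch of the two-step process described above. The attribute names, value domains, and the toy predict function are illustrative assumptions, not taken from the paper; a real t-way test set would come from a covering-array generator (e.g., NIST ACTS) rather than the full Cartesian product used here for brevity.

```python
from itertools import product

# Step 1: an input parameter model -- each attribute with a discretized
# value domain (illustrative values, not from the paper).
param_model = {
    "age_group":  ["<30", "30-50", ">50"],
    "education":  ["HS", "BSc", "MSc"],
    "hours_band": ["<=40", ">40"],
    "sex":        ["male", "female"],   # protected attribute
}

# Full product shown only for brevity; a t-way (e.g., pairwise) generator
# would produce a much smaller covering test set.
test_set = [dict(zip(param_model, combo))
            for combo in product(*param_model.values())]

def find_violations(predict_fn, tests, protected="sex"):
    """Step 2: for each test, vary the protected attribute and report
    tests whose predicted label changes (a fairness violation)."""
    violations = []
    for test in tests:
        labels = {predict_fn({**test, protected: v})
                  for v in param_model[protected]}
        if len(labels) > 1:
            violations.append(test)
    return violations

# Toy stand-in for a pre-trained model: intentionally biased on "sex".
def toy_model(row):
    return 1 if row["education"] == "MSc" and row["sex"] == "male" else 0

print(f"{len(find_violations(toy_model, test_set))} of {len(test_set)} tests flagged")
```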
