Abstract

Many of the prominent examples of bias in Machine Learning (ML) arise from bias in the training data. Indeed, some would argue that supervised ML algorithms cannot themselves be biased because they simply reflect the data on which they are trained. In this paper, we demonstrate how ML algorithms can misrepresent the training data through underestimation. We show how irreducible error, regularization, and feature and class imbalance can contribute to this underestimation. The paper concludes with a demonstration of how the careful management of synthetic counterfactuals can ameliorate the impact of this underestimation bias.
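For intuition, the sketch below (ours, not taken from the paper) illustrates the class-imbalance case: a regularized classifier trained on imbalanced data tends to predict the minority class at a lower rate than it actually occurs, and a crude rebalancing step, standing in here for the paper's synthetic counterfactuals, narrows that gap. The synthetic dataset, the regularization strength `C`, and the oversampling scheme are illustrative assumptions.

```python
# Minimal sketch of underestimation under class imbalance (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced data: class 0 is ~80% of samples, class 1 (minority) ~20%.
X, y = make_classification(
    n_samples=5000, n_features=10, weights=[0.8, 0.2], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Strong L2 regularization (small C) pulls predictions toward the majority
# class, so the model predicts the minority class less often than it occurs.
clf = LogisticRegression(C=0.01).fit(X_tr, y_tr)
print("actual minority rate:   ", y_te.mean())
print("predicted minority rate:", clf.predict(X_te).mean())

# A crude stand-in for synthetic counterfactuals: oversample the minority
# class (3 extra copies per example) so the training classes are balanced.
minority = X_tr[y_tr == 1]
X_bal = np.vstack([X_tr, minority.repeat(3, axis=0)])
y_bal = np.concatenate([y_tr, np.ones(3 * len(minority), dtype=int)])
clf_bal = LogisticRegression(C=0.01).fit(X_bal, y_bal)
print("after rebalancing:      ", clf_bal.predict(X_te).mean())
```

Naive duplication is used only to keep the sketch self-contained; the paper's contribution concerns the careful construction of counterfactual examples, not simple oversampling.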
