Abstract

How do some of the most advanced machine learning facial recognition algorithms make important decisions, such as whom to hire or who is considered a leader? Existing research suggests that advances in machine learning methods can use facial features in an image (facial morphology) to accurately and objectively predict answers to such questions. We show, however, that even after implementing state-of-the-art models, decision-based facial recognition algorithms are not as objective as previously claimed. Unpacking the “black box” of an existing facial recognition algorithm revealed that the algorithm did not rely on facial morphology to make decisions. Instead, once covariates such as attractiveness were accounted for, the algorithm relied mostly on “leftover” transient features, such as clothing, hairstyle, and background lighting. We identify the specific stages (sampling, preprocessing, model implementation, and model functioning) in which algorithmic focus bias and interpretation bias are likely to arise in facial recognition algorithms. These results suggest that decision-based facial recognition algorithms are biased in ways that researchers have overlooked, with troubling implications for their use by governments, organizations, and researchers. We introduce the concept of “algorithmic face-ism,” in which (1) machine learning algorithms unfairly express an inherent preference for specific facial morphologies, and (2) researchers mistakenly attribute behavioral predictions to facial morphologies. This paper thus demonstrates how leading decision-based facial recognition systems are biased and how previously taken-for-granted factors contribute to this pattern of bias. We conclude by discussing how such bias can be mitigated in facial recognition algorithms.
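As a concrete illustration of the kind of audit the abstract describes, the sketch below contrasts a model’s sensitivity to the face region against its sensitivity to the surrounding context (clothing, hair, background). This is a minimal occlusion-based probe under stated assumptions, not the paper’s actual method; the names `occlusion_sensitivity`, `model_fn`, and `face_box` are hypothetical placeholders.

```python
import numpy as np

def occlusion_sensitivity(model_fn, image, face_box):
    """Compare how much the model's score shifts when the face region
    versus the surrounding context is occluded. If occluding the
    context moves the score more than occluding the face, the model
    is leaning on "leftover" transient features rather than facial
    morphology. (Illustrative diagnostic, not the paper's method.)"""
    x0, y0, x1, y1 = face_box
    baseline = model_fn(image)

    # Occlude the face region with the image mean.
    face_masked = image.copy()
    face_masked[y0:y1, x0:x1] = image.mean()

    # Occlude everything except the face region.
    context_masked = np.full_like(image, image.mean())
    context_masked[y0:y1, x0:x1] = image[y0:y1, x0:x1]

    return {
        "face_importance": abs(baseline - model_fn(face_masked)),
        "context_importance": abs(baseline - model_fn(context_masked)),
    }

# Toy usage with a stand-in scorer that (deliberately) only looks at
# the top band of the image, mimicking a model that keys on background.
img = np.random.rand(128, 128).astype(np.float32)
scores = occlusion_sensitivity(lambda im: float(im[:32].mean()), img, (32, 32, 96, 96))
print(scores)  # large context_importance relative to face_importance
```

A real audit would replace the stand-in scorer with the classifier under study and average these sensitivities over a dataset; a context_importance that dominates face_importance is the pattern of reliance on transient features that the abstract reports.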
