ABSTRACT
Advances in computer science, specifically in the development and use of generative machine learning, have given psychologists powerful new tools for creating synthetic human faces as stimuli. These tools produce high-quality, photorealistic face images with many advantages, including reducing typical ethical and privacy concerns and enabling the generation of faces from minoritized communities that are underrepresented in existing face databases. However, machine learning-based face generation and manipulation software can introduce bias into the research process in a number of ways, thereby threatening the validity of studies. The present article summarizes how one recently popular class of face-generation algorithms, generative adversarial networks (GANs), works; how GANs are controlled; and where biases, with a particular focus on racial biases, emerge throughout these processes. We discuss recommendations for mitigating these biases, as well as how these concepts manifest in similar modern text-to-image algorithms.
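To make the latent-space control mentioned in the abstract concrete, below is a minimal, self-contained Python sketch (not drawn from the article itself) of how researchers typically control a pretrained GAN: by interpolating between latent codes and shifting along learned attribute directions. The generator G, its dimensions, and the attribute direction d are hypothetical stand-ins for a real pretrained model such as a StyleGAN-family network.

```python
import numpy as np

# Hypothetical stand-in for a pretrained GAN generator G: z -> image.
# In practice G would be a deep network; here it is a fixed random
# linear map followed by a sigmoid, so the sketch runs on its own.
rng = np.random.default_rng(0)
LATENT_DIM, IMG_PIXELS = 512, 64 * 64
W = rng.normal(size=(IMG_PIXELS, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def G(z):
    """Map a latent code z to a flattened 'image' with pixels in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(W @ z)))

# Controlling a GAN: faces are edited by moving through latent space.
z_a = rng.normal(size=LATENT_DIM)  # latent code for one synthetic face
z_b = rng.normal(size=LATENT_DIM)  # latent code for another

# Linear interpolation between two codes yields a smooth face "morph".
for alpha in np.linspace(0.0, 1.0, 5):
    img = G((1 - alpha) * z_a + alpha * z_b)
    print(f"alpha={alpha:.2f}  mean pixel={img.mean():.3f}")

# A learned "attribute direction" d (e.g., perceived age) shifts one
# trait. If d is estimated from biased training data, edits along it can
# entangle race with the target attribute -- one route by which the
# biases discussed in the abstract enter the research process.
d = rng.normal(size=LATENT_DIM)  # placeholder, not a real learned direction
edited_face = G(z_a + 2.0 * d)
```

Real pipelines differ mainly in scale: the generator is a trained neural network, and attribute directions are fit to labeled latent samples, which is precisely where dataset biases can propagate into the resulting stimuli.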