Abstract
Advances in computer science, specifically in the development and use of generative machine learning, have provided psychologists with powerful new tools for creating synthetic human faces as stimuli. These tools produce high-quality, photorealistic face images with many advantages, including reducing typical ethical and privacy concerns and generating images of faces from minoritized communities that are underrepresented in existing face databases. However, machine learning-based face generation and manipulation software can introduce bias into the research process in a number of ways, threatening the validity of studies. The present article summarizes how one class of recently popular face-generation algorithms, generative adversarial networks (GANs), works; how GANs are controlled; and where biases, with a particular focus on racial biases, emerge throughout these processes. We discuss recommendations for mitigating these biases, as well as how these concepts manifest in similar modern text-to-image algorithms.
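To make the adversarial mechanism behind GANs concrete, the following is a minimal sketch of a GAN training loop, assuming PyTorch as the framework. The network sizes, latent dimension, and toy one-dimensional "data" are illustrative placeholders, not the architectures used in face-generation systems such as those discussed in the article.

```python
# Minimal GAN training loop: a generator learns to produce samples that a
# discriminator cannot distinguish from "real" data. All sizes and the toy
# data distribution below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 16  # size of the random noise vector fed to the generator

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 5          # stand-in for real training data
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Note that the generator only ever sees the training data indirectly, through the discriminator's gradients; this is one reason biases present in the training set (for example, demographic skews in a face database) propagate into the generated outputs.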