Abstract
People often apply gender stereotypes to computerized agents. Rather than challenging these stereotypes, modern AI technologies controversially exploit them, creating AI agents that embody stereotypical gender roles. This approach further reinforces often male-dominated societal gender norms and disenfranchises women and gender non-binary individuals. While this issue has raised concerns in the HCI and CSCW communities, little is known about how to mitigate the negative impacts of embedding gender stereotypes in AI agents. In this paper, we propose an eXplainable AI (XAI) approach to mitigating individuals' gender stereotypes toward AI agents. We conducted an online video vignette experiment with 350 participants, each randomly assigned to one of the eighteen conditions of a 3 (agent gender: woman, man, gender-neutral) x 3 (task gender: feminine, masculine, neutral) x 2 (presence or absence of AI explanation) between-subjects design. Our findings suggest that XAI helped participants avoid applying gender stereotypes to gendered AI agents by increasing their understanding of how the agent reached its decision and by decreasing their rating of the agent's humanlikeness. We contribute to CSCW research by providing a timely investigation into individuals' gender stereotypes toward state-of-the-art AI agents and by advancing the empirical understanding of the cognitive processes and mechanisms underlying these stereotypes. We also demonstrate how eXplainable AI can effectively suppress the application of social characteristics (i.e., gender stereotypes) to AI agents by disrupting these cognitive processes. Insights from this study can inform how future AI technologies should be designed to create a progressive gender reality that gradually reshapes humans' experience and ingrained gender ideologies.