Abstract

Artificial intelligence is quietly shaping social structures and private lives. Although it promises parity and efficiency, its computational processes mirror biases of existing power even as often-proprietary data practices and cultural perceptions of computational magic obscure those influences. However, intersectionality—which foregrounds an analysis of institutional power and incorporates queer, feminist, and critical race theories—can help to rethink artificial intelligence. An intersectional framework can be used to analyze the biases and problems built into existing artificial intelligence, as well as to uncover alternative ethics from its counter-histories. This paper calls for the application of intersectional strategies to artificial intelligence at every level, from data to design to implementation, from technologist to user. Drawing on intersectional theories, the research argues these strategies are polyvocal, multimodal, and experimental—suggesting that community-focused and artistic practices can help imagine AI’s intersectional possibilities and help begin to address its biases.

Highlights

  • Artificial Intelligence (AI) should be created, critiqued, and reframed with an awareness of power—valuing multiple perspectives and methodologies—in order to address the social inequalities it reinforces

  • After briefly contextualizing current arguments about AI bias, and exploring how intersectionality analyzes institutional power not individual identity alone, this paper will offer examples of intersectional strategies from Black feminist, mixed race, and queer communities that might be applied to algorithm design and implementation

  • The examples are widespread and troubling, like the ProPublica study of AI criminal risk assessment that found it terrifyingly inaccurate: “The score proved remarkably unreliable in forecasting violent crime: Only 20 percent [...] went on to [commit them]. [...] The formula was likely to falsely flag black defendants as future criminals, [...]”



Introduction

An intersectional framework can be used to analyze existing AI and to uncover alternative possibilities from its counter-histories, as well as to help shift approaches to designing new AI. After briefly contextualizing current arguments about AI bias, and exploring how intersectionality analyzes institutional power not individual identity alone, this paper will offer examples of intersectional strategies from Black feminist, mixed race, and queer communities that might be applied to algorithm design and implementation.


