Abstract
This paper analyzes where Artificial Intelligence (AI) ethics research fails and breaks down the dangers of well-intentioned but ultimately performative ethics research. A large majority of AI ethics research is critiqued for lacking a comprehensive analysis of how AI is interconnected with sociological systems of oppression and power. Our work contributes to the small body of research that presents intersectional, Western systems of oppression and power as a framework for examining AI ethics work and the complexities of building less harmful technology, directly connecting technology to named systems such as capitalism and classism, colonialism, racism and white supremacy, patriarchy, and ableism. We then explore the effect of current AI ethics rhetoric on the AI ethics domain and on AI regulation. In conclusion, we provide an applied example that contextualizes intersectional systems of oppression and AI interventions in the U.S. justice system, and we present actionable steps for AI practitioners to participate in a less performative, critical analysis of AI.