Abstract

Due to growing concerns about the disproportionate dangers that artificial intelligence (AI) advances pose to marginalized groups, proposals for procedural solutions to ethics in AI abound. It is time to consider that some systems may be inherently unethical, even violent, whether or not they are fair. In this article, we deploy a feminist critical discourse analysis of long-format responses to ethical scenarios from computing science undergraduate students. We find that even among students who had a strong understanding of social justice and of the power of AI to exacerbate existing inequities, most contextualized these problems as the product of biased datasets and human mis/trust factors, rather than as problems of design and purpose. Further, while many students recognized racism and classism at play in the potential negative impacts of AI systems, most failed to recognize ableism as a driving social force for inequity. As computing science faculty, we must recognize that our students graduate to become the researchers and developers of future technosocial systems. Pedagogically, we need more than procedural fixes to systemic inequities. We are not going to program our way into justice. We must learn to say no to building violent things.
