Abstract

With the recent rapid developments in artificial intelligence (AI), social scientists and computational scientists have approached overlapping questions about ethics, responsibility, and fairness. Joined-up efforts between these disciplines have nonetheless been scarce due to, among other factors, unfavourable institutional arrangements, unclear publication avenues, and sometimes incompatible normative, epistemological and methodological commitments. In this paper, we offer collaborative ethnography as one concrete methodology to address some of these challenges. We report on an interdisciplinary collaboration between science and technology studies scholars and data scientists developing an AI system to detect online misinformation. The study combined description, interpretation, and (self-)critique throughout the design and development of the AI system. We draw three methodological lessons to move from critique to action for interdisciplinary teams pursuing responsible AI innovation: (1) collective self-critique as a tool to resist techno-centrism and relativism, (2) moving from strategic vagueness to co-production, and (3) using co-authorship as a method.
