Abstract
Operationalizing AI ethics and safety principles and frameworks is essential to realizing the potential benefits of AI systems and mitigating the harms they may cause. To that end, actors across industry, academia, and regulatory bodies have created formal taxonomies of harm to support operationalization efforts. These include novel “holistic” methods that go beyond exclusive reliance on technical benchmarking. However, our paper argues that such taxonomies remain too general to be readily implemented in sector-specific AI safety operationalization efforts, especially in under-resourced or “high-risk” sectors. This is because many sectors are constituted by discourses, norms, and values that “refract” or even directly conflict with those operating in society more broadly. Drawing from emerging anthropological theories of human rights, we propose that the process of “vernacularization” (a participatory, decolonial practice distinct from doctrinary “translation,” the dominant mode of AI safety operationalization) can help bridge this gap. To demonstrate this point, we consider the education sector and identify precisely how vernacularizing a leading taxonomy of harm yields a clearer view of how the harms AI systems may cause are substantially intensified when those systems are deployed in educational spaces. We conclude by discussing the generalizability of vernacularization as a useful AI safety methodology.