Abstract

This editor's introduction welcomes readers to a new interdisciplinary undertaking. The community of practice that Critical AI addresses hopes to bring critical thinking of the kind that interpretive disciplines foster into dialogue with work by technologists and others who share an understanding of interdisciplinary research as a powerful tool for building accountable technology in the public interest. Critical AI studies aims to shape and activate conversations in academia, industry, policymaking, media, and the public at large. The long and ongoing history of “AI,” including the data-driven technologies that now claim that name, remains riddled with three core dilemmas: (1) reductive and controversial meanings of “intelligence”; (2) problematic benchmarks and tests for supposedly scientific terms such as “AGI”; and (3) bias, errors, stereotypes, and concentration of power. AI hype today is steeped in blends of utopian and dystopian discourse that distract from the real-world harms of existing technologies. In reality, what is hyped and anthropomorphized as “AI” and even “AGI” is the product not only of technology companies and investors but also—and more fundamentally—of the many millions of people and communities subject to copyright infringement, nonconsensual use of data, bias, environmental harms, and the low-wage and high-stress modes of “human in the loop” labor through which systems for probabilistic mimicry improve their performance in an imitation game.