Abstract

In mid-2023, promoters of artificial intelligence (AI) as an “existential risk” coined a new term, “frontier AI,” which refers to “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” Promoters of this new term were able to disseminate it via the United Kingdom (UK) government’s Frontier AI Taskforce (formerly the Foundation Models Taskforce) as well as the UK’s AI Safety Summit, held in November 2023.

I argue that adoption of the term “frontier AI” is harmful and contributes to AI hype. Promoting this new term is a way for its boosters to focus the public conversation on the AI-related risks they think are most important, namely “existential risk”: a scenario in which AI is able to bring about the destruction of humanity. Simultaneously, “frontier AI” is a re-branding exercise for the large-scale generative machine learning (ML) models that have been shown to cause severe and pervasive harms, including psychological, social, and environmental harms. Unlike “existential risk,” these harms are actual rather than theoretical; the term “frontier AI” moves our collective focus away from them and toward hypothetical doomsday scenarios.

Moreover, “frontier AI” as a term invokes the colonial mindset, further reinscribing the harmful dynamics between the handful of powerful Western companies that produce today’s generative AI models and the people of the “Global South” who are most likely to experience harm as a direct result of the development and deployment of these AI technologies.
