Abstract

Technologies of “artificial intelligence” (AI) and machine learning (ML) are increasingly presented as solutions to key problems of our societies. Companies are developing, investing in, and deploying machine learning applications at scale in order to filter and organize content, mediate transactions, and make sense of massive sets of data. At the same time, social and legal expectations are ambiguous, and the technical challenges are substantial. This is the introductory article to a special theme that addresses this turn to AI as a technical, discursive and political phenomenon. The opening article contextualizes this theme by unfolding this multi-layered nature of the turn to AI. It argues that, whereas public and economic discourses position the widespread deployment of AI and automation in the governance of digital communication as a technical turn with a narrative of revolutionary breakthrough moments and of technological progress, this development is at least equally dependent on a parallel discursive and political turn to AI. The article positions the current turn to AI in the longstanding motif of the “technological fix” in the relationship between technology and society, and identifies a discursive turn to responsibility in platform governance as a key driver for AI and automation. In addition, a political turn to more demanding liability rules for platforms further incentivizes platforms to automatically screen their services for potentially infringing or violating content, and positions AI as a solution to complex social problems.

Highlights

  • When Facebook CEO Mark Zuckerberg was pressed in the 2018 Senate hearing on issues of misinformation, hate speech and privacy, he was eager to present a solution: “artificial intelligence” (AI) will fix this! (Katzenbach, 2019)

  • Senators asked Zuckerberg about what had happened in previous years, and demanded to hear about the company’s plans to respond adequately and responsibly in the future to the challenges posed by disinformation campaigns, the spread of hate speech, terrorist propaganda and other problematic content.

  • Zuckerberg repeatedly referred to the development and increasing use of AI-powered systems to detect hate speech, terrorism and misinformation: “In the future, we’re going to have tools that are going to be able to identify more types of bad content.” He reassured the senators that future systems will cope much better with the difficult contextual and nuanced classification of language: “Over a 5 to 10-year period, we will have A.I. tools that can get into some of the nuances – the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”

Introduction

When Facebook CEO Mark Zuckerberg was pressed in the 2018 Senate hearing on issues of misinformation, hate speech and privacy, he was eager to present a solution: “AI will fix this!” (Katzenbach, 2019). This general AI discourse functions as a sounding board for the second discursive development relevant here, namely the remarkable shift in platform governance discourse since 2015–16: for several years, public and political actors have increasingly demanded that platforms take responsibility for the content and communication dynamics on their services (responsibility turn).
