Abstract

Rapid progress in Artificial Intelligence (AI) over the past few years has raised important questions—and triggered anxiety—about the potential (mis)use of AI-generated content in information warfare and disinformation campaigns. A crucial question remains unanswered, however: whether, and to what extent, this technology can be credibly harnessed by extremists, whose language is usually characterized by highly specific subcultural registers that AI models are unlikely to mimic easily. This paper answers that question, offering the first rigorous evaluation of the credibility of synthetic (AI-generated) extremist prose. Using an expert survey experiment that measures the credibility of fake incel forum posts and ISIS magazine paragraphs generated through an original workflow for producing synthetic extremist text, we show that these texts achieve high credibility scores, confusing world-leading experts. These findings, discussed in light of the emerging literature on nefarious "dual use" of synthetic content, not only define and evaluate the threat of extremists harnessing language models for propaganda; they also strengthen our broader understanding of the role language models are due to play in hostile political communication, and ultimately resurface the debate on the human costs of technological progress.
