Abstract

AI researchers have developed sophisticated language models capable of generating paragraphs of ‘synthetic text’ on topics specified by the user. While AI text generation has legitimate benefits, it could also be misused, potentially to grave effect. For example, AI text generators could be used to automate the production of convincing fake news, or to inundate social media platforms with machine-generated disinformation. This paper argues that AI text generators should be conceptualised as a dual-use technology, outlines some relevant lessons from earlier debates on dual-use life sciences research, and calls for closer collaboration between ethicists and the machine learning community to address AI language models’ dual-use implications.
