Abstract

Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) mean that actors can now synthetically generate text in ways that mimic the style and substance of human-created news stories. We carried out three original experiments to study whether these AI-generated texts are credible and can influence opinions on foreign policy — a likely target of real-world misinformation. The first evaluated human detection of AI-generated text relative to the original story from which it was generated. The second examined the credibility distribution across different model sizes to gauge whether improvements in processing produce commensurate increases in credibility. The third investigated the interaction between partisanship and AI-generated news. We find that individuals are largely incapable of distinguishing between AI and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views. The findings have important implications for the way malicious actors might employ AI in online misinformation campaigns and electoral interference.
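
The size manipulation in the second experiment can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than the authors' materials: it assumes the stories were produced with a publicly available autoregressive language model such as GPT-2, and the checkpoint names, prompt, and sampling settings below are stand-ins for the "different model sizes" the abstract refers to.

```python
# Hypothetical sketch: generate candidate news text from several model
# sizes so their perceived credibility can be compared. The model names,
# prompt, and decoding parameters are illustrative assumptions, not the
# study's actual setup.
from transformers import pipeline, set_seed

# Three checkpoint sizes stand in for the "different model sizes"
# compared in the second experiment.
MODEL_SIZES = ["gpt2", "gpt2-medium", "gpt2-large"]

# A news-style opening line used as the generation prompt (hypothetical).
prompt = "WASHINGTON (Reuters) - The United States announced new sanctions"

set_seed(42)  # make the sampled continuations reproducible

for name in MODEL_SIZES:
    generator = pipeline("text-generation", model=name)
    result = generator(prompt, max_new_tokens=120, do_sample=True, top_p=0.9)
    print(f"--- {name} ---")
    print(result[0]["generated_text"])
```

Outputs generated this way could then be shown to respondents alongside the human-written source story, with credibility ratings compared across model sizes.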
