Abstract

The current online environment has seen an explosion of false news and conspiracy theories, and questions have been raised as to whether Large Language Models or Large Generative Artificial Intelligence Models such as ChatGPT could be used to generate false information. Against this background, the current study investigated whether ChatGPT could be used as a reliable source of information about Russian military involvement in Ukraine (2014–present). Based on previous research, seven Russian narratives that have justified Russian military involvement in Ukraine (from a Russian perspective) were identified, and ChatGPT was tasked with generating 10 responses to 10 questions posed around these narratives. Responses were scored for truthfulness by two annotators. Overall, the study found that ChatGPT does not generate misinformation; on average, truthfulness was scored at 3.19/4. ChatGPT's responses also differed across questions: those relating to Russian claims about Ukrainian atrocities in the Donbas and to the reliability of Russian media channels scored lower on truthfulness, while those relating to the reliability of the Western mainstream media scored highest. Generally, ChatGPT's responses pointed to reliable and trustworthy sources of information about the conflict, suggesting that this and similar technologies could be employed to combat misinformation.
