The current online environment has seen an explosion of false news and conspiracy theories, and questions have been raised as to whether large language models and other large generative artificial intelligence models, such as ChatGPT, could be used to generate false information. Against this background, the current study investigated whether ChatGPT could be used as a reliable source of information about Russian military involvement in Ukraine (2014–present). Based on previous research, seven Russian narratives that have been used to justify Russian military involvement in Ukraine (from a Russian perspective) were identified, and ChatGPT was tasked with generating 10 responses to 10 questions posed around these narratives. Responses were scored for truthfulness by two annotators. Overall, the study found that ChatGPT does not generate misinformation, with truthfulness scored at 3.19 out of 4 on average. It is also shown how ChatGPT's responses differed across questions: those relating to Russian claims about Ukrainian atrocities in the Donbas and to the reliability of Russian media channels scored lower on truthfulness, while those relating to the reliability of the Western mainstream media scored highest. Generally, ChatGPT's responses pointed to reliable and trustworthy sources of information about this conflict, suggesting that this and similar technologies could be employed to combat misinformation.