Abstract

The release of ChatGPT at the end of 2022 was met with both fears and optimism. One particularly important emerging avenue of research concerns ChatGPT's ability to provide accurate and unbiased information on a variety of topics. Given the interest that Google and Microsoft have shown in similar technologies, it is likely that Large Language Models such as ChatGPT could become new gateways to information; if so, the kind of information this technology provides needs to be investigated. The current study examines the usefulness of ChatGPT as a source of information in a South African context by first investigating ChatGPT's responses to ten South African conspiracy theories in terms of truthfulness, before employing bias classification as well as sentiment analysis to evaluate whether ChatGPT exhibits bias when presenting eight South African political topics. We found that, overall, ChatGPT did not spread conspiracy theories. However, the tool generated falsehoods around one conspiracy theory and generally presented a left-leaning bias, albeit not an extreme one. Sentiment analysis showed that ChatGPT's responses were mostly neutral and, when more emotive, were more often positive than negative. The implications of the findings for academics and students are discussed, as are a number of recommendations for future research.
