Abstract
Generative AI systems designed to produce text do so by drawing inferences from their training data, which means they may reproduce factual errors or biases contained in that data. This process is illustrated by querying ChatGPT with questions from a history of mathematics quiz designed to highlight the frequent misattribution of mathematical results. ChatGPT's performance on a set of decades-old common misconceptions is mixed, illustrating the potential for these systems to reproduce and reinforce historical inaccuracies and misconceptions.