Abstract

Physics education research (PER) has a rich tradition of designing learning environments that promote valued epistemic practices such as sensemaking and mechanistic reasoning. Recent technological advances, particularly in artificial intelligence, have gained significant traction in the PER community because of AI's human-like, sophisticated responses to physics tasks. In this study, we contribute to these ongoing efforts by comparing AI (ChatGPT) and student responses to a physics task through the cognitive frameworks of sensemaking and mechanistic reasoning. Findings highlight that, by virtue of its training data set, ChatGPT's responses provide evidence of mechanistic reasoning and mimic the vocabulary of experts. In contrast, half of the students' responses evidenced sensemaking and reflected an effective blend of diagram-based and mathematical reasoning, showcasing a comprehensive problem-solving approach. Thus, while the AI responses elegantly reflected how physics is talked about, a portion of the students' responses reflected how physics is practiced. In a second part of the study, we presented ChatGPT with variations of the task, including an open-ended version and one with significant scaffolding. We observed significant differences in the conclusions drawn and in the use of representations across both the student groups and the task formats.
