Abstract

Introduction: ChatGPT has attracted worldwide interest for its versatility across a range of natural language tasks, including in education and assessment, where it can automate time- and labor-intensive work with clear economic and efficiency gains.

Methods: This study evaluated ChatGPT's potential to automate psychometric analysis of test questions from the 2020 Portuguese National Residency Selection Exam (PNA). ChatGPT was queried 100 times with each of the exam's 150 multiple-choice questions (MCQs). From ChatGPT's responses, a difficulty index was calculated for each question as the proportion of correct answers. The predicted difficulty levels were then compared with the actual difficulty levels of the 2020 exam MCQs using methods from classical test theory.

Results: ChatGPT's predicted item-difficulty indices correlated significantly with the actual item difficulties (r(148) = −0.372, p < .001), suggesting general consistency between the predicted and real values. There was also a moderate, significant negative correlation between the difficulty index predicted by ChatGPT and the number of challenges a question received (r(148) = −0.302, p < .001), highlighting ChatGPT's potential for identifying less problematic questions.

Conclusion: These findings point to ChatGPT's potential as a tool for assessment development, demonstrating its capability to predict the psychometric characteristics of high-stakes test items and to support automated item calibration without pre-testing in real-life scenarios.
