Abstract
The study highlights ChatGPT-4's potential in educational settings for evaluating university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses in English, ranging from 24 to 256 words. It assessed each response against five criteria and assigned a grade on a six-point scale from fail to excellent, resulting in 3,240 evaluations. Verification-based chain-of-thought prompting within a RAG framework ensured that ChatGPT-4 accurately recalled the responses and remained aligned with the university's evaluation criteria. ChatGPT-4's grading showed good consistency with the teacher's grading, and recall errors and discrepancies between ChatGPT-4's and the teacher's assessments could be reduced. The results suggest promising potential for using LLMs such as ChatGPT-4 to evaluate academic written responses.
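To make the described setup concrete, the following is a minimal sketch of how a retrieval-grounded, verification-based chain-of-thought grading prompt could be structured. It assumes the OpenAI Python client; the criteria names, scale labels, rubric text, and the `retrieve_rubric` helper are illustrative assumptions and not the study's actual materials or prompts.

```python
# Minimal sketch: RAG-grounded, verification-style chain-of-thought grading.
# Assumptions: OpenAI Python client (openai>=1.0); criteria names, scale labels,
# and the retrieve_rubric helper are illustrative, not the study's materials.
from openai import OpenAI

client = OpenAI()

CRITERIA = ["relevance", "accuracy", "structure", "argumentation", "language"]  # illustrative
SCALE = ["fail", "pass", "satisfactory", "good", "very good", "excellent"]      # six-point scale

def retrieve_rubric(criterion: str) -> str:
    """Hypothetical retrieval step: return the rubric text for one criterion.
    In practice this would query the university's evaluation documents."""
    rubrics = {c: f"Rubric text for {c} (placeholder)." for c in CRITERIA}
    return rubrics[criterion]

def grade_response(student_answer: str) -> dict:
    """Grade one student answer against each criterion, one prompt per criterion."""
    grades = {}
    for criterion in CRITERIA:
        rubric = retrieve_rubric(criterion)  # RAG: ground the model in the official rubric
        messages = [
            {"role": "system",
             "content": "You are grading a university written exam answer."},
            {"role": "user",
             "content": (
                 f"Rubric for '{criterion}':\n{rubric}\n\n"
                 f"Student answer:\n{student_answer}\n\n"
                 # Verification-based chain of thought: quote, reason, verify, then grade.
                 "Step 1: Quote the parts of the answer relevant to this criterion.\n"
                 "Step 2: Reason step by step about how well they meet the rubric.\n"
                 "Step 3: Verify that every quote actually appears in the answer.\n"
                 f"Step 4: Output one grade from {SCALE}."
             )},
        ]
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        grades[criterion] = reply.choices[0].message.content
    return grades
```

The explicit verification step (checking that quoted passages actually occur in the answer) is one plausible way to reduce recall mistakes of the kind the abstract mentions; the exact prompt wording used in the study is not reproduced here.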