Abstract
The surge of interest in ChatGPT has led people to explore its use in a wide range of tasks. Before it is allowed to replace humans, however, its capabilities should be investigated. Given ChatGPT's potential for use in testing and assessment, this study examines the questions generated by ChatGPT by comparing them with those written by a course instructor. To this end, 36 junior students took a 40-item practice test consisting of 20 multiple-choice items generated by ChatGPT and 20 written by the course instructor. Results indicate an acceptable degree of consistency between ChatGPT and the course instructor. Post-hoc analyses show comparable item difficulty across the two sources, but the ChatGPT items were weaker in item discrimination and distractor analysis. These findings suggest that ChatGPT can potentially generate multiple-choice exams similar to those of a course instructor.
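For readers unfamiliar with the post-hoc measures named above, the sketch below shows how the two classical test theory indices are conventionally computed: item difficulty as the proportion of correct responses, and item discrimination as the correlation between an item and the rest-of-test score. This is a minimal illustration, not code from the study; the function and variable names are assumptions, and the data is synthetic.

```python
# Minimal sketch (assumed, not from the paper) of classical test theory
# item statistics: difficulty (proportion correct) and discrimination
# (item vs. rest-of-test correlation).
import numpy as np

def item_statistics(responses: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """responses: (n_students, n_items) binary matrix, 1 = correct answer."""
    # Difficulty index p: fraction of students answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination: correlation of each item with the rest-of-test score
    # (the item itself is excluded to avoid inflating the correlation).
    total = responses.sum(axis=1, keepdims=True)
    rest = total - responses
    n_items = responses.shape[1]
    discrimination = np.array([
        np.corrcoef(responses[:, j], rest[:, j])[0, 1] for j in range(n_items)
    ])
    return difficulty, discrimination

# Synthetic example mirroring the study's design: 36 students, 40 items.
rng = np.random.default_rng(0)
scores = (rng.random((36, 40)) > 0.4).astype(int)
p, d = item_statistics(scores)
print("difficulty:", p[:5], "discrimination:", d[:5])
```

On real data, the 20 ChatGPT items and 20 instructor items would be scored separately with statistics such as these and then compared, which is the kind of comparison the abstract reports.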