Online assessment is an essential part of online education and, when conducted properly, has been found to gauge student learning effectively. Text-based questions have generally been the cornerstone of online assessment. Recently, however, the emergence of generative artificial intelligence has posed a significant challenge to the integrity of online assessments. In particular, large language models such as ChatGPT-4o have been reported to perform strongly on text-based questions. In our study, by contrast, ChatGPT-4o exhibited significantly reduced performance on figure-based questions. To counter the recent encroachment of ChatGPT-4o into online assessment, we propose a step-by-step instructional guide for creating figure-based multiple-choice questions that are resistant to ChatGPT-4o. The method involves generating a ChatGPT-4o-resistant figure, writing the question text based on that figure, and evaluating the final question with ChatGPT-4o. When such a question is successfully created, ChatGPT-4o's responses are effectively reduced to random guessing. Our results showcase four representative examples for introductory biology courses and illustrate a systematic approach to composing questions based on qualitative analysis of ChatGPT-4o responses. In combination with other assessment methods, our approach aims to serve as a tool for alleviating the challenges that educators currently face in online assessment.
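The abstract only outlines the final evaluation step (testing the completed question against ChatGPT-4o). Below is a minimal sketch, not taken from the paper, of how that step might be automated with the OpenAI Python client: the figure and question are sent to a vision-capable model several times, and a roughly uniform spread of answers would be consistent with random guessing. The model name "gpt-4o", the figure path, and the sample question are assumptions or placeholders, not details from the study.

```python
# Hypothetical sketch of the evaluation step: pose a figure-based
# multiple-choice question to a vision-capable model repeatedly and
# tally the answers. Assumes the OpenAI Python client is installed,
# OPENAI_API_KEY is set, and "gpt-4o" is the target model.
import base64
from collections import Counter

from openai import OpenAI

client = OpenAI()

FIGURE_PATH = "question_figure.png"  # placeholder figure file
QUESTION = (
    "Based on the figure, which process is occurring at the stage "
    "indicated by the arrow?\n"
    "A) Transcription\nB) Translation\nC) Replication\nD) Splicing\n"
    "Answer with a single letter."
)  # placeholder question text

def ask_once() -> str:
    """Send the figure and question once; return the model's raw answer."""
    with open(FIGURE_PATH, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": QUESTION},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

# Repeat the query; an answer distribution close to uniform across the
# four options suggests the question resists the model.
answer_counts = Counter(ask_once()[:1].upper() for _ in range(10))
print(answer_counts)
```

In practice, repeated trials like this complement the qualitative analysis of ChatGPT-4o responses described in the abstract; the tally only indicates whether answers scatter across options, not why the model fails.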