Open-ended assessments play a pivotal role in enabling instructors to evaluate student knowledge acquisition and provide constructive feedback. Integrating large language models (LLMs) such as GPT-4 into educational settings presents a transformative opportunity for assessment methodologies. However, existing literature on LLMs answering open-ended questions lacks breadth, relying on limited data or overlooking question difficulty levels. This study evaluates GPT-4's proficiency in responding to open-ended questions spanning diverse topics and cognitive complexities, compared against human responses. To facilitate this assessment, we generated a dataset of 738 open-ended questions across Biology, Earth Sciences, and Physics and systematically categorized it according to Bloom's Taxonomy. Each question included eight human-generated responses and two from GPT-4. The results indicate that GPT-4 outperformed human respondents, both native and non-native speakers, irrespective of gender. However, this advantage did not hold for questions at the 'remembering' or 'creating' levels of Bloom's Taxonomy. These results highlight GPT-4's potential for underpinning advanced question-answering systems, its promising role in supporting non-native speakers, and its capacity to assist teachers in assessment. Nevertheless, limitations in nuanced argumentation and creativity underscore areas requiring refinement in these models, guiding future research toward stronger pedagogical support.