Abstract

Recently, the authors encountered a situation in a mobile chat group (WhatsApp) in which a medical lecturer asked what value a human papillomavirus (HPV) test adds to the Pap smear for cervical cancer screening. The field experts in the group replied with detailed, erudite explanations. Soon after, another lecturer posted an answer from a GPT-3-based chatbot. Although it lacked the depth of an expert’s reply, the chatbot gave concise answers, reframing complex medical jargon in plain English without losing crucial medical information. Its answers were easier to understand, and all of this came with the ease of a human-like engagement. This account is one of countless stories about ChatGPT that have been making headlines, academic journals included, illustrating how large language model technology may have disrupted conventional educational practice. One element distinguishes this technology from all its predecessors: it is not trying to mimic a human response but is responding like a human. In this article, we frame the discussion around the most fundamental aspect of assessment: its purpose. We revisit the concept of fidelity from the field of simulation to explain how the technology may have rejuvenated the purpose of assessment for learning (formative assessment). We then articulate several associated challenges in the conduct of high-stakes assessment of learning (summative assessment). We conclude by emphasizing that these purposes remain the guiding principles even as the landscape in which assessment is conducted changes.
