Abstract

The launch of ChatGPT sent ripples across higher education, particularly in attempts to detect students' use of it to complete assignments. This paper reflects on the past semester of 'life with ChatGPT', starting with a narrative of current practices in assessment writing at the Singapore University of Social Sciences, before embarking on a test of the response generated by ChatGPT in answering a case study-based exam question. Taken together with the author's experience grading assignments that, unbeknownst to the author at the time, had been at least partially generated by artificial intelligence (AI), it is surmised that case studies do not lend themselves well to good-quality responses from AI tools such as ChatGPT at the current stage of their development. The implications for assessment writing involving case studies include the need for sufficient detail, both essential and peripheral, so that students must 'separate the wheat from the chaff' and decipher how best to apply the concepts being examined based on the unique circumstances of the case scenario. At this point in time, the use of complex, detailed, and well-written case studies in assessment questions appears to be ChatGPT's kryptonite and could help ensure authentic assessment.
