Abstract
The emergence of ChatGPT and similar generative AI tools has created concern about the validity of many current assessment methods in higher education, since learners might use these tools to complete those assessments. Here we review the current evidence on this issue and show that for assessments such as essays and multiple-choice exams, these concerns are legitimate: ChatGPT can complete them to a very high standard, quickly and cheaply. We consider alternative ways of assessing learning, and the importance of retaining assessments of foundational core knowledge. This evidence is considered from the perspective of current regulations covering the professional registration of Biomedical Scientists and their Health and Care Professions Council (HCPC)-approved education providers, although it should be broadly relevant across higher education.