Abstract
This article presents a brief overview of the state of the art in large language models (LLMs) such as ChatGPT and discusses the difficulties these technologies create for educators with regard to assessment. Drawing on the 'arms race' metaphor, it argues that there are no simple solutions to the 'AI problem'. Rather, the author shows that educators and universities will need to adopt a complex strategy consisting of solutions at four different levels of vulnerability: ethical, pedagogic/didactic, technological, and policy. Lastly, the article presents general recommendations for addressing vulnerabilities at each of these levels.