Abstract

In this presentation, Dr Thomas Lancaster will examine how far academic work can be faked. He will report on some of his ongoing research and experiments, much of which has depended on the widespread availability of large language models, such as ChatGPT, which can generate original text and often combine ideas in a way that appears intelligent, albeit different to how a human would approach a problem. Thomas has developed several case studies showing how artificial intelligence and machine learning systems can be used to generate assignment solutions, slides, computer programs, marketing materials and even academic research papers, amongst other outputs. Although the ideas behind this generation can be applied across many academic disciplines, Thomas plans to share examples that are most relevant to the academic integrity community. Used as intended, artificial intelligence represents a powerful way to improve the quality of education and to better prepare students for the future, but such use also raises questions surrounding originality and authorship. Join Thomas to discover what is possible and to consider how to future-proof work in the academic integrity field.
