Abstract

As ChatGPT became a popular and powerful language model used worldwide in 2023, the problem of students using it to cheat on schoolwork grew acute. While many existing AI content detectors, such as the GPT-2 Output Detector and GPTZero, can identify AI-generated text, their accuracy on generated essays that have been post-edited by humans is unknown. This research examined the limitations of the GPT-2 Output Detector and answered the question, “How does human post-editing of AI-generated high school English essays affect the result of an AI content detector?” Ten English essays were generated using ChatGPT Plus from prompts supplied by high school English teachers. Each essay was then edited in five different ways to create pairs of unedited and edited essays. All unedited and edited essays were evaluated with the GPT-2 Output Detector Demo, and the detector’s results were analyzed. It was found that introducing spelling mistakes into generated essays and paraphrasing them with QuillBot made the detector’s results less accurate. These findings can serve as a guide for companies developing AI-generated text detectors, helping them handle edited generated text more accurately. The findings can also benefit schools and educators: knowing that students can edit essays to bypass AI content detectors, educators can develop new ways to assess students’ writing ability.
