Abstract

Background: Chatbot-written text now appears in academic documents, often without attribution. An accurate manual screening paradigm would therefore be useful.

Method: In a series of four test manuscripts suspected of containing chatbot-written text, N=93 peculiar catchphrases were highlighted, and Google Search was used to find articles containing each catchphrase. Paragraphs containing the catchphrase in recent documents were checked for chatbot origin with the GPTZero detector. For paragraphs that GPTZero confirmed as likely chatbot-associated, the following statistics were recorded (N=50): the number of articles published with each catchphrase paragraph in the periods 2012-2014, 2015-2017, 2020-2022 (after GPT introduction), and 2023-March 2024 (after ChatGPT introduction); the citations per article; the publishing journal's Impact Factor; and the document section in which the chatbot phrase appeared.

Results: N=86/93 (92.5%) of suspected peculiar phrasings had paragraphs with chatbot association by GPTZero. The mean number of published articles containing a chatbot-associated paragraph was 21.7 for 2012-2014, 25.6 for 2015-2017, and 43.2 for 2020-2022, versus 67.2 for 2023-March 2024 (p = 0.004). 75% of the chatbot-containing articles studied were published in journals with an Impact Factor. The mean journal Impact Factor was 4.99, with some articles appearing in journals with Impact Factors above 10. Chatbot phrasing was most common in Abstracts and Introductions, but also appeared in Methods, Results/Discussion, Limitations, and Conclusions sections.

Conclusions: Chatbot content often has peculiar phrasing that typically appears in other chatbot-associated documents as well. Such odd chatbot phrasings can be detected manually. Chatbot content is increasing and is present in top journals.
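The tallying step of the screening method above (counting, for each confirmed catchphrase, how many articles fall into each study period) can be sketched as follows. This is a minimal illustration, not the authors' code: the period boundaries come from the abstract, while the function names and example dates are hypothetical.

```python
# Study periods as defined in the abstract (2018-2019 is intentionally
# excluded, matching the reported comparison windows).
PERIODS = ["2012-2014", "2015-2017", "2020-2022", "2023-Mar 2024"]

def period_for(year: int, month: int = 1):
    """Map a publication date to one of the study's periods (None if outside)."""
    if 2012 <= year <= 2014:
        return "2012-2014"
    if 2015 <= year <= 2017:
        return "2015-2017"
    if 2020 <= year <= 2022:
        return "2020-2022"
    if year == 2023 or (year == 2024 and month <= 3):
        return "2023-Mar 2024"
    return None

def counts_by_period(dates):
    """Tally (year, month) publication dates into the four study periods."""
    tally = {p: 0 for p in PERIODS}
    for year, month in dates:
        p = period_for(year, month)
        if p is not None:
            tally[p] += 1
    return tally
```

For example, `counts_by_period([(2013, 5), (2021, 2), (2024, 1)])` assigns one article to each of 2012-2014, 2020-2022, and 2023-Mar 2024; a 2019 article would be dropped, since that year lies outside the study's comparison windows.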
