This study examines the writing quality of two AI chatbots, OpenAI ChatGPT and Google Gemini. Texts generated from five essay models were assessed with the T.E.R.A. software, focusing on ease of understanding, readability, and reading levels based on the Flesch-Kincaid formula. Thirty essays were generated, 15 from each chatbot, and checked for plagiarism using two free detection tools, SmallSEOTools and Check-Plagiarism, and one paid tool, Turnitin. The findings revealed that both ChatGPT and Gemini performed well in word concreteness overall but showed weaknesses in narrativity. Relative to each other, ChatGPT performed more strongly in referential and deep cohesion, while Gemini scored higher in narrativity, syntactic simplicity, and word concreteness. A significant concern, however, was the degree of plagiarism detected in texts from both AI tools, with ChatGPT's essays exhibiting a higher likelihood of plagiarism than Gemini's. These findings highlight the limitations and risks associated with using AI-generated writing.