Abstract

Artificial intelligence (AI) has gained traction in scientific research, but concerns about plagiarism and fraud have surfaced. This study evaluates the capacity of AI detection tools to distinguish AI-generated from human-written text in the foot and ankle surgery literature. Six publicly available AI detection tools were used to analyze 12 abstracts (6 AI-generated and 6 human-written). Copyleaks demonstrated the highest raw accuracy (83%). Across all tools, overall accuracy was 63%, with a 25% false-positive rate. GPTZero, retested after three months, showed increased sensitivity (24.5%) in identifying AI-generated content. To assess countermeasures, the AI-generated abstracts were reworded using ChatGPT 3.5; rewording reduced detected AI content by 54.83%. These findings highlight the difficulty of reliably detecting AI-generated content in the scientific literature and underscore the need for robust countermeasures and continued vigilance against potentially fraudulent research. The study sheds light on the evolving landscape of AI detection technologies and emphasizes the urgency of adapting journal policies to safeguard against emerging threats.
