Abstract

Advances in AI, particularly in neural networks, have produced powerful tools such as text generators and chatbots. While these technologies offer substantial benefits, they also pose serious risks, including privacy breaches, the spread of misinformation, and challenges to academic integrity. Previous efforts to distinguish human-written from AI-generated text have had limited success, especially against models like ChatGPT. To address this, we created a dataset containing both human-written and ChatGPT-generated text and used it to train and evaluate a range of machine learning and deep learning models. Our results, particularly the high F1-score and accuracy achieved by a RoBERTa-based custom deep learning model and DistilBERT, indicate promising progress in this area. By establishing a robust baseline for detecting and classifying AI-generated content, this work contributes to mitigating potential misuse of AI-powered text-generation tools.
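For readers who want a concrete starting point, the sketch below shows one way such a detector could be fine-tuned with the Hugging Face Transformers library. It is an illustrative assumption, not the authors' actual pipeline: the CSV file names, column names ("text", "label"), and hyperparameters are all hypothetical, and DistilBERT is used here simply because it is one of the models the abstract mentions.

```python
# Minimal sketch (assumed setup): fine-tune DistilBERT to classify
# human-written vs. ChatGPT-generated text. File names, column names,
# and hyperparameters are illustrative, not the paper's configuration.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # a RoBERTa checkpoint would work the same way

# Hypothetical CSV files with columns: "text", "label" (0 = human, 1 = ChatGPT)
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad every example to a fixed length so batches stack cleanly
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Binary classification head on top of the pretrained encoder
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="ai-text-detector",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)

trainer.train()
print(trainer.evaluate())  # add a compute_metrics function to report accuracy/F1
```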
