Abstract

Generative Pre-trained Transformers such as ChatGPT are AI systems that produce human-like responses in forms such as text or images, and they have demonstrated excellent performance in producing logical and contextually relevant answers. However, false positive/negative detection of AI-generated content has been noted as a challenge. In this article, statistical experiments are conducted to estimate the chances of false positive and false negative detection of AI-generated text. It was found that the detected likelihood of generative AI in articles’ abstracts is much lower than in paragraphs taken from the literature sections of the selected articles; in other words, literature sections are more likely to be falsely flagged as AI-generated text. Moreover, when genuine texts are compared with AI-generated texts, there is a noticeable overlap between their score distributions, and therefore both type I and type II errors fall within the realm of possibility. We show that despite these challenges, generative AI such as ChatGPT continues to be a promising tool for communication and information retrieval. However, it is vital to address concerns regarding the false detection of AI-generated text and to ensure that these models are used in an ethical and responsible manner.
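To make the type I/type II trade-off concrete, the sketch below simulates two overlapping detector-score distributions, one for genuine text and one for AI-generated text, and measures both error rates at a fixed decision threshold. This is a minimal illustration only, not the study's methodology: the beta-distribution parameters and the 0.5 threshold are assumptions chosen for demonstration, not values reported in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detector scores in [0, 1]: higher = "more likely AI-generated".
# Beta(2, 5) is skewed low, Beta(5, 2) skewed high; the two distributions
# overlap, which is what makes both error types possible.
genuine_scores = rng.beta(2, 5, size=10_000)  # human-written text
ai_scores = rng.beta(5, 2, size=10_000)       # AI-generated text

threshold = 0.5  # decision rule: score > threshold => flag as AI-generated

# Type I error: genuine text falsely flagged as AI-generated (false positive).
type_i = np.mean(genuine_scores > threshold)

# Type II error: AI-generated text not flagged (false negative).
type_ii = np.mean(ai_scores <= threshold)

print(f"Type I  (false positive) rate: {type_i:.3f}")
print(f"Type II (false negative) rate: {type_ii:.3f}")
```

Raising the threshold lowers the type I rate at the cost of a higher type II rate, and vice versa; because the distributions overlap, no threshold drives both errors to zero.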
