Abstract

The use of ChatGPT (Chat Generative Pre-trained Transformer), an artificial intelligence tool, for writing scientific articles has been a subject of discussion in the academic community ever since its launch in late 2022. This artificial intelligence technology is becoming capable of generating fluent language, and distinguishing between text produced by ChatGPT and text written by people is becoming increasingly difficult. Here, we present several topics for discussion: (1) ensuring human verification; (2) establishing accountability rules; (3) avoiding the automatization of scientific production; (4) favoring truly open-source large language models (LLMs); (5) embracing the benefits of artificial intelligence; and (6) broadening the debate. With the emergence of these technologies, it is crucial to regulate, with continuous updates, the development and responsible use of LLMs with integrity, transparency, and honesty in research, together with scientists from various areas of knowledge, technology companies, large research funding bodies, science academies and universities, editors, non-governmental organizations, and law experts.
