Abstract

The gap between predictability and comprehensibility threatens the entire scientific project: mathematical models of processes, fed by enormous amounts of data of very diverse origin, provide exceptionally precise results but, at the same time, hide the explanation of the processes. The ontological knowledge of "what we know" is as relevant in science as the epistemological knowledge of "how we know" and "how much we know". Artificial intelligence (AI) involves the scientific understanding of the mechanisms underlying intelligent thought and behavior, as well as their embodiment in machines trained by their creators to reason in a conventional sense. Its "weak" formulation refers to the use of complex computer programs, designed to complement or assist human reasoning in solving complex problems of calculation, system maintenance, image recognition, design, analysis of data patterns, and so on, many of which would be practically unapproachable using conventional procedures; all this, however, without including human sentient or ethical capabilities, which would be the subject of a, for the moment, non-existent "strong" AI that would equal or even exceed human sentient intelligence.

The popularization of "generative" AI, developed to create content (text, images, music or videos, among many other formats) from prior information, is helping to consolidate the popular but erroneous idea that current AI exceeds human-level reasoning, and it exacerbates the risk of transmitting false information and negative stereotypes to people. AI language models do not work by emulating a biological brain; they are based on the search for logical patterns in large databases from diverse sources, which are not always updated or purged of falsehoods, errors, or conceptual and factual biases, both involuntary and self-serving. The AI used in science is no stranger to these limitations and biases.
A particularly sensitive issue is the possibility of using generative AI to write, or even fabricate, scientific articles that go unnoticed by peer reviewers at the world's most prestigious scientific journals. This points to an even deeper problem: reviewers often lack the time to examine manuscripts thoroughly for red flags and, in many cases, also lack adequate computing resources and specialized training.
