Abstract

Recent breakthroughs in artificial intelligence (AI) research include advancements in natural language processing (NLP) achieved by large language models (LLMs) and, in particular, generative pre-trained transformer (GPT) architectures. The latest GPT developed by OpenAI, GPT-4, has shown remarkable intelligence capabilities across various domains and tasks. It exhibits capabilities in abstraction, comprehension, vision, computer coding, mathematics, and more, suggesting it is a significant step towards artificial general intelligence (AGI): a level of AI that possesses capabilities similar to human intelligence. In this paper we (1) review the capabilities GPT-4 demonstrates in the above-mentioned areas, (2) study some drawbacks of autoregressive architectures, (3) highlight some areas where GPT-4 can be improved, along with some fundamental questions on how and why this LLM achieves the intelligence it has, (4) present a potential path that could lead the advancement of GPT models towards super-human domain intelligence, realized by the development of products such as auto-researchers, (5) show how GPT-4 can facilitate the diffusion of knowledge across different areas of science by promoting their interpretability and explainability (IE) to inexperts, (6) discuss a broad range of influences that the deployment of this powerful technology will have on society, (7) address the governance of human-competitive AI, and, finally, (8) comment on the benchmarking methods used to evaluate the capabilities of LLMs, as well as on some challenges in defining and measuring these capabilities in the framework of AGI. Where applicable, the topics are accompanied by their specific potential implications for medical imaging.
