Efficiency Assessment of the Artificial Intelligence Market: Exploring the Limits

Abstract

The development of Artificial Intelligence (AI) is significantly impacting the global economy, transforming corporate strategies and enhancing operational efficiency. This study analyzes the relative efficiency of the Generative AI (GenAI) market by comparing the market size of the chips, servers, and data center infrastructure required for its operation with the market size of AI solutions. The study hypothesizes that the current AI market, despite its rapid development, is of a catching-up nature relative to the component market and does not yet reflect a proportional relationship between the volumes of these markets (the hardware market and the AI solutions market). The capital expenditures of technology giants on AI infrastructure have increased significantly, and decades may be required to achieve a balance between the size of the hardware market that supports AI and the size of the AI solutions market itself. To assess the efficiency of the AI market, the Data Envelopment Analysis (DEA) methodology is applied, treating the market size of components as "inputs" and the market size of AI solutions as "outputs". The results of the DEA analysis of GenAI market dynamics from 2016 to 2024 reveal a non-linear pattern of development: starting in 2021, the trend reverses and efficiency indicators decline, which confirms the hypothesis of the catching-up nature of AI technologies relative to the component market. Fluctuations in efficiency begin three years after the deployment of the first large language models, indicating their significance for hardware demand while not yet demonstrating sufficient returns in the form of comparable growth of the AI solutions market.
The limitations of the study are associated with the time interval of the analysis (2016-2024) and the composition of the companies included, which covers a majority, but not the entirety, of the market. The novelty of the study lies in the application of DEA analysis for a comprehensive assessment of the AI market that nevertheless distinguishes between the component market and the market of AI technological solutions. The results provide a critical assessment of the prospects for the development of the AI market and identify an imbalance between the "soft" (technological solutions) and "hard" (components) markets, pointing to the potential for more efficient exploration and use of generative models. The results require further development in terms of describing the effects in different sectors of the economy.
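The abstract does not reproduce the DEA formulation itself. As an illustrative sketch only, not the authors' implementation, the standard input-oriented CCR envelopment model can be computed in Python by treating each year as a decision-making unit, with component market sizes as inputs and the AI solutions market size as the output; all data passed to the function below would be hypothetical placeholders for the study's market-size series.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR DEA (envelopment form).

    X: (n, m) array of inputs (e.g., chip/server/data-center market sizes),
    Y: (n, s) array of outputs (e.g., AI solutions market size),
    one row per decision-making unit (here, one row per year).
    Returns an efficiency score theta in (0, 1] for each unit.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.zeros(n + 1)
        c[0] = 1.0
        A_ub, b_ub = [], []
        # Input constraints: sum_j lambda_j * x_ij <= theta * x_io.
        for i in range(m):
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
            b_ub.append(0.0)
        # Output constraints: sum_j lambda_j * y_rj >= y_ro.
        for r in range(s):
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] + [(0.0, None)] * n)
        scores[o] = res.x[0]
    return scores
```

A score of 1.0 places a year on the efficient frontier of the observed set; lower scores indicate that the same AI-solutions output was, relative to the best-performing years, obtained with proportionally more hardware input, which is the kind of post-2021 decline the study reports.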

Similar Papers
  • Research Article
  • Citations: 37
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction
Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and it was instantly hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters simultaneously feed the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world…

  • Research Article
  • Citations: 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more


  • Research Article
  • Citations: 2
  • 10.30884/seh/2024.01.07
The Evolution of Artificial Intelligence: From Assistance to Super Mind of Artificial General Intelligence? Article 1. Information Technology and Artificial Intelligence: The Past, Present and Some Forecasts
  • Mar 30, 2024
  • Social Evolution & History
  • Leonid Grinin + 2 more

The article is devoted to the history of the development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements, and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction in society. The close connection between the development of AI and cognitive science, and the penetration of ICT and AI into various fields, in particular health care, is shown. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. We analyze recent achievements in the field of Artificial Intelligence, describe the basic models, in particular Large Language Models (LLMs), and forecast the development of AI and the dangers that await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence that is increasingly approaching the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, large corporations, and those often referred to as globalists. The article forecasts the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles.
The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI; it also analyzes the concepts of artificial intelligence, including generative AI, and the changes in the understanding of AI related to the emergence of so-called large language models and the new types of AI programs based on them (such as ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI and analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI: the fact that achievements in the field of AI are becoming a powerful means of controlling the population, imposing ideology and choices, influencing the results of elections, and a weapon for undermining security and for geopolitical struggle.

  • Discussion
  • Citations: 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer


  • Research Article
  • Citations: 16
  • 10.1162/daed_e_01897
Getting AI Right: Introductory Notes on AI & Society
  • May 1, 2022
  • Daedalus
  • James Manyika

This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2 Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3 Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the emergence recently of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding.
While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention. Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5 While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators (now covering more than one hundred languages), facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6 As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7 Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations (mathematical, technical, and philosophical), with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI (the connectionist learning approach and the notion of neural nets) and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much." Putnam questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?" This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence.
This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10 The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society. I am grateful to the authors for accepting my invitation to write these essays. Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI.
This has not been helped by the current trend, among researchers in science and the humanities, startups, established companies, and even governments, to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI. Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually human, can do. The human abilities invoked in such definitions include visual perception, speech recognition, the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later).
The second type of definitions try to be free of human-centricity and define an intelligent agent or system, whatever its origin, makeup, or method. This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own. These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects. Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us? Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it and the use of the term is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17 In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieve artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19 A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was most espoused by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers.
The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems. Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition (exemplified by Descartes, Boole, Frege, Russell, and Church, among others) that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others. The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach. Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come.
The success of the machine learning approach has benefited from the boom in the availability of data to train the algorithms thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses. Several authors delve into progress in key subfields of AI.21 In their essay, "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets such as ImageNet that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models, comprising hundreds of billions of parameters, that use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity."
Some have started to refer to these large language models as foundation models, in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24 In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations of the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems. In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior.
Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although so far more in one direction than the other. For all the success of the current approaches to AI, there remain many shortcomings, as well as problems in how AI systems are used. It is useful to distinguish several kinds: cases in which AI does not perform as intended, produces biased or erroneous outputs that can lead to harm, relies on incomplete or unrepresentative information about the world, or is put to harmful uses, all of which can contribute to a loss of public trust. These shortcomings have captured the attention of the wider public as well as of researchers, and within the field there is now an active discourse on AI ethics and governance. In recent years, there has been a proliferation of efforts to articulate principles and approaches to responsible AI, as well as initiatives involving governments and international organizations that seek to codify best practices. Equally important has been the question of who participates, with respect to gender, race, and geography, in conceiving, funding, and developing AI, in both research and industry, as has been well documented in recent years. This is an important issue in its own right, but it also bears on the characteristics of the resulting AI and, in turn, on its intersections with society. On the other hand, there are limitations and problems associated with the many things that AI is still not capable of, and with what it could take to achieve more robust, more capable, or more general AI.
In their Turing lecture on deep learning, Yoshua Bengio, Yann LeCun, and Geoffrey Hinton took stock of where deep learning stands and highlighted its current limitations, such as its difficulties with generalizing beyond the training distribution. In the case of natural language processing, Manning and Choi lay out the challenges that remain in reasoning and common sense, despite the performance of large language models. Elsewhere, some researchers have challenged the notion that large language models do anything like learning, understanding, or reasoning. In "Multi-Agent Systems: Technical & Ethical Challenges of Functioning in a Mixed Group," Kobi Gal and Barbara Grosz discuss the open problems in multi-agent systems, such as how agents reason about other agents and coordinate with them, as well as the technical and ethical challenges that arise especially when the agents involved include both humans and machines. There is also a growing recognition among many researchers that we do not yet have adequate methods for the evaluation of AI systems, especially as they become more capable and their contexts of use expand. And although AI and its related techniques are proving to be powerful tools for research in science, as examples in this volume show, recent cases in which AI not only helps analyze results but also helps design experiments raise new opportunities and challenges for how science is done. The possibility that more powerful AI could lead to new breakthroughs in science, as well as to progress on grand challenges, has long been a key motivation for many at the frontier of AI research. The pursuit of more capable AI involves progress on each of the building blocks of intelligence, learning, reasoning, perception, and the representation of knowledge, and raises the question of whether the current paradigm, characterized by deep learning, large pretrained foundation models, and reinforcement learning, will suffice, or whether different approaches are needed, such as cognitive agent architectures or hybrid systems based on logic and probability theory, to name a few.
Whether and what combination of approaches will be required for more capable AI remains unsettled, but many believe that the current approaches, along with growing quantities of data and compute and improved learning architectures, have yet to reach their limits. Debate about the limits of the current approaches is closely associated with the question of whether artificial general intelligence can be achieved and, if so, how and when. Artificial general intelligence is defined in contrast to what is sometimes called narrow AI: systems designed and trained for particular tasks and goals. The development of artificial general intelligence, on the other hand, aims for more powerful AI, at least as powerful as humans, that is generally applicable to any problem or domain and, in some conceptions, has the capacity to learn and improve itself, as well as to set and pursue goals of its own. The question of if and when such AI will be achieved is debated, but most agree that its achievement would have profound consequences, as often depicted in films from 2001: A Space Odyssey to Ex Machina. Whether it is near or far off, there is growing agreement among many at the frontier of AI research that we should prepare for the possibility of powerful AI, with respect to its alignment with human values and its cooperation with humans, its safety and use, and the possibility that control of such systems could be lost, and that we should build these considerations into how we approach its development.

Much of the research, development, and investment in AI is driven by its commercial potential, a dynamic Nigel Shadbolt also considers. This is understandable, given the potential for commercially useful applications and for productivity gains in most sectors of the economy. However, only a few organizations have made the development of artificial general intelligence their explicit goal; the most discussed of these have each demonstrated systems of increasing capability, though still a long way from general intelligence. Perhaps the most discussed societal impact of AI and automation is on jobs and the future of work. This concern is not new. In the mid-1960s, amid excitement about automation and concerns about its impact on employment, a U.S. national commission concluded that such technologies were important for growth and productivity and that technology "eliminates jobs, but not work." Most recent assessments of this question, including those I have been involved in, have concluded that, over time, more jobs are created than are lost, and that it is the mix of tasks and activities, the skills required, and the wages associated with work that will change most. In their essay "Automation, AI & Work," Laura Tyson and John Zysman discuss these implications for work and workers, including the risks and opportunities that arise especially in developing countries. In "The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence," Erik Brynjolfsson discusses how the use of human benchmarks in the development of AI steers the field toward AI that substitutes for, rather than complements, human labor. He argues that the direction AI's development takes in this regard, and the resulting outcomes for workers, will depend on the incentives facing researchers, companies, and governments.

The conclusion that more jobs will be created than destroyed has its critics, who worry that it draws too much from the patterns of the past and does not look far enough into the future at what AI will be capable of. Until now, automation has mostly affected physical and routine tasks, but AI is increasingly taking on cognitive and nonroutine tasks and, if early examples are any indication, even creative tasks are not out of its reach. In other words, there are now coming into the world machines that learn, whose abilities apply to an ever-wider range of problems and will increasingly overlap with the full range of capabilities to which human labor has been applied, a possibility anticipated by Herbert Simon and Allen Newell, and a reason this time could be different. There are usually two responses. One is that new forms of labor will emerge in which humans are preferred by other humans for their own sake, even when machines are capable of performing the same tasks as well as or better. The other is that AI will create so much abundance, all without the need for human labor, that the main challenge will be distribution, so that, as Keynes put it, "for the first time since his creation man will be faced with his real, his permanent problem," namely how to use his freedom from pressing economic cares and how to occupy the leisure "which science and compound interest will have won for him, to live wisely and agreeably and well." However, most researchers agree that we are nowhere near such a future, and that until then other issues must be addressed in the labor market, now and in the near term, such as wage and other disparities, and how humans will work alongside increasingly capable machines, issues that Tyson, Zysman, and other contributors discuss.

Jobs are not the only societal concern raised by AI. Russell considers the risks potentially arising from artificial general intelligence, whenever it may arrive. But even before we get to general-purpose AI, the opportunities for companies and countries, for productivity and growth, as well as the risks from AI and its related technologies, are more than enough to fuel intense pursuit of, and competition in, the development, deployment, and use of AI. Many countries now treat the technology as strategic: it is generally acknowledged that China is a rising power in AI, as evidenced by the growth of its AI research, and, as several essays highlight, this competition will have consequences for companies and countries alike, given the dual-use character of such technologies. Differences will show up in national approaches to AI and its regulation, in who develops it (whether companies or states), and in which actors have the resources to invest in AI. The role of AI in intelligence gathering, weapons systems, autonomous operations, and other aspects of national security is also of increasing consequence.

  • Research Article
  • 10.1002/sd.70544
Toward Agentic Environments: GenAI and the Convergence of AI, Sustainability, and Human‐Centric Spaces
  • Dec 14, 2025
  • Sustainable Development
  • Przemek Pospieszny + 1 more

In the past few years, the evolution of artificial intelligence (AI), particularly generative AI (GenAI) and large language models (LLMs), has made human‐computer interactions more frequent, easier, and faster than ever before. This brings numerous benefits in terms of enhancing efficiency, accessibility, and convenience in various sectors from banking to health. AI tools and solutions applied in computers and communication devices support decision‐making processes and managing operations of users on the individual level as well as organisationally, including resource allocation, workflow automation, and real‐time data analysis. However, the current use of AI carries a substantial environmental footprint due to its reliance on high‐computational cloud resources. In such a context, this paper introduces the concept of agentic environments, a sustainability‐oriented AI framework that goes beyond reactive systems by leveraging GenAI, multi‐agent systems, and edge computing to minimize the negative impact of technology. These types of environments can contribute to the optimization of resource use, enhanced quality of life, and prioritization of sustainability while at the same time safeguarding user privacy through decentralized, edge‐driven AI solutions. Based on both secondary and primary data gathered during a focus group and semi‐structured interviews with AI professionals from leading technology companies, the authors provide a conceptual framework of agentic environments and discuss it in the context of three lenses, including personal sphere, professional and commercial use, and urban operations. The findings include the potential of agentic environments to foster sustainable ecosystems, mainly due to the optimisation of resource usage and securing the privacy of data. The study outlines recommendations for implementing edge‐driven deployment models to reduce dependency on currently widely applied high‐energy cloud solutions.

  • Research Article
  • 10.30884/jfio/2023.03.01
Artificial Intelligence: Development and Anxieties. A Look into the Future. Article One. Information Technologies and Artificial Intelligence: Past, Present, and Some Forecasts
  • Sep 30, 2023
  • Философия и общество
  • Леонид Гринин + 2 more

The article is devoted to the history of development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements, and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction into society. The close connection between the development of AI and cognitive science, and the penetration of ICT and AI into various fields, in particular health care, is shown. A significant part of the article is devoted to the analysis of the concept of "artificial intelligence," including the definition of generative AI. The article analyzes recent achievements in the field of Artificial Intelligence and describes the basic models, in particular Large Language Models (LLMs), along with forecasts of the development of AI and the dangers that will await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence that increasingly approaches the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations, and those often referred to as globalists. The article forecasts the development of computers, ICT, and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles.
The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI, it also analyzes the concepts of artificial intelligence, including generative AI, changes in the understanding of AI in connection with the emergence of the so-called large language models and related new types of AI programs (ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI, analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI, the fact that achievements in the field of AI are becoming a powerful means of control over the population, imposing ideology and choice, influencing the results of elections, and a weapon for undermining security and geopolitical struggle.

  • Research Article
  • 10.51702/esoguifd.1583408
Ethical and Theological Problems Related to Artificial Intelligence
  • May 15, 2025
  • Eskişehir Osmangazi Üniversitesi İlahiyat Fakültesi Dergisi
  • Necmi Karslı

Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may eventually surpass it. The rapid development of these technologies has raised various ethical debates concerning moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions in detail, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of various disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thoughts on artificial intelligence have raised deep philosophical questions about human nature, consciousness, and responsibility. The algorithms developed by Alan Turing have contributed to the modern shaping of artificial intelligence and put forward the first models, such as the "Turing Test," to assess whether machines have human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence includes systems designed to perform specific tasks that do not exhibit general intelligence outside of those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most of the widely used artificial intelligence applications today fall into the category of weak artificial intelligence.
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Especially autonomous weapons and artificial intelligence-managed justice systems raise concerns in terms of human rights and individual freedoms. In this context, the ethical framework of artificial intelligence has deep impacts on the future of humanity and human-machine interaction, not just limited to technological boundaries. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness and whether these conscious systems can have a spiritual status have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around principles such as transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles that need to be considered in the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for the development of artificial intelligence in harmony with social values and for the benefit of humanity. 
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.

  • Research Article
  • 10.2118/1224-0012-jpt
Guest Editorial: Patience in AI Development Is a Virtue: Why 99% Correct Is 100% Wrong
  • Dec 1, 2024
  • Journal of Petroleum Technology
  • Shane Mcardle

In heavy-asset industries such as oil and gas, precision is crucial. The saying "99% accurate is 100% wrong" reflects this reality. Despite the excitement of new technology, even minor errors can have significant consequences. For example, an artificial intelligence (AI) system inaccurately predicting a critical machinery component's lifespan could lead to unexpected failures, causing costly downtime and safety hazards. Kongsberg Digital, an early adopter of Microsoft's large language models (LLMs), has witnessed AI's transformative power firsthand. Over the past 2 years, these LLMs have driven a resurgence in AI interest. Generative AI's growth presents unprecedented opportunities for the heavy-asset industry, transforming interactions with complex systems. Implementing AI can optimize maintenance schedules, predict failures before they happen, and streamline operations. The rise of generative AI has also spotlighted the more traditional areas of analytics, classification, prediction, and physics-based simulations. This renewed interest has led the oil and gas industry to look for ways to utilize AI to enhance operational efficiency, reduce costs, and improve safety standards. However, AI inaccuracies or “hallucinations” are deal-breakers. AI-generated misinformation can mislead decision-makers, potentially resulting in disastrous outcomes. In some cases, using AI is unnecessary and adds little value. Heavy-asset industries prioritize safety and have historically been conservative in adopting new technologies. While post-ChatGPT developments are significant, the industry lags due to its zero tolerance for failure. This caution is justified; in environments where lives and substantial investments are at stake, even minor errors can be catastrophic. Moreover, precision in AI ensures not only safety and efficiency but also drives sustainability. Accurate AI predictions minimize waste and reduce energy consumption. 
Inaccurate decisions can lead to excessive energy use and waste, counteracting sustainability goals. AI can lower carbon emissions and reduce the environmental footprint by optimizing energy usage and ensuring regulatory compliance. Building trust in AI requires responsibility in development and implementation, ensuring security without compromise. We must be transparent about data lineage—a principle that guides our transformative journey. For AI to fully integrate into heavy-asset industries, it must be accurate but also secure and trustworthy. This transparency builds confidence among stakeholders, from engineers to executives.

  • Preprint Article
  • 10.20944/preprints202501.2099.v1
A Roadmap to Superintelligence: Architectures, Transformations, and Challenges in Modern AI Development
  • Jan 28, 2025
  • Ruslan Idelfonso Magana Vsevolodovna

This paper examines the trajectory of artificial intelligence (AI) development, focusing on three key stages: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Recent advancements in AI architectures, particularly the evolution of transformer-based models, have significantly accelerated progress across these stages, enabling more sophisticated and scalable AI systems. This paper explores the architectural foundations of ANI, AGI, and ASI, highlighting recent modifications and their implications for future AI development. Additionally, the societal, ethical, and geopolitical implications of AI are discussed, emphasizing the need for robust safeguards and governance frameworks to ensure that AI serves as a force for human advancement rather than a source of existential risk. By integrating historical comparisons, current trends, and future projections, this paper provides a comprehensive analysis of the transformative potential of AI and its impact on humanity.

  • Research Article
  • 10.1108/bfj-03-2025-0317
Leveraging artificial intelligence for innovation in wineries: a global study
  • Dec 12, 2025
  • British Food Journal
  • Giuliano Marolla + 2 more

Purpose This study explores the adoption of artificial intelligence (AI) in wineries, with a specific focus on its application to drive innovation and on the organisational and contextual factors that influence its exploration and adoption. Design/methodology/approach The research project is based on an exploratory approach employing a questionnaire developed through a literature review and refined using the Delphi method via a survey with over 500 participants. Findings Wineries employ many AI solutions, from generative AI tools that facilitate creative and agile processes to more embedded, enterprise-level AI systems that require significant investment and IT integration. Wineries that adopt a dual approach, integrating both kinds of AI solutions, exhibit the highest innovation orientation across many dimensions of innovation. In contrast, wineries that rely exclusively on generative AI leverage it to innovate marketing processes. However, most wineries surveyed have not implemented AI solutions for process innovation, suggesting that AI development and adoption are in their infancy. Research limitations/implications The study did not examine whether the adoption of AI solutions actually generated innovation, nor was the extent of AI adoption at the micro or meso level assessed. However, the investigation may provide valuable insights and contribute to the development of more targeted strategies for the digital transformation of the wine sector. By exploring organisational and contextual dimensions, it provides a deeper understanding of the variables that may encourage or hinder the implementation of AI technologies. Originality/value This study can be leveraged by wineries as a pioneering analysis of innovation-oriented AI adoption in the wine sector in several countries. The findings offer practical value for the wine industry in promoting digital transformation and harnessing the innovative potential of AI.

  • Front Matter
  • 10.1093/9780198945215.003.0047
Bridging the AI Divide
  • Mar 20, 2025
  • Fola Adeleke

While the use of artificial intelligence (AI) technologies in Africa is growing, the disparities in use persist along various dimensions. This article aims to address how enabling governance frameworks can make large language models (LLMs) more inclusive through representation of low-resource languages in training data sets to enable equitable access to information and services. This article assesses emerging governance ecosystems in Africa from the perspective of coloniality and representation in generative AI and the extent to which LLMs can be used to tackle power asymmetries between African data subjects and AI developers to reduce inequalities that the adoption of generative AI may induce in Africa. By assessing the emerging national AI strategies in Africa, this article identifies a gap in AI governance frameworks across Africa specifically in relation to inclusivity in AI development. The countries briefly reviewed are Kenya, South Africa, and Nigeria due to the size and importance of their economies in sub-Saharan Africa and the recent efforts by the governments in these countries to adopt a governance framework to maximize AI technologies in their economies. With the use of case studies and developments across Africa, this article identifies three main data governance areas that will enable equitable generative AI in Africa. These are data generation and collection, regulatory sandboxes, and policy prototypes as well as data sharing. The issues addressed in this article center on data justice and the necessity for visibility, fairness, and representation in the adoption of generative AI across Africa.

  • Research Article
  • 10.1002/jgc4.2009
Generative AI and the profession of genetic counseling.
  • Mar 20, 2025
  • Journal of genetic counseling
  • Leo Meekins-Doherty + 3 more

The development of artificial intelligence (AI) including generative large language models (LLMs) and software like ChatGPT is likely to significantly influence existing workforces. Genetic counseling has been identified as a profession likely impacted by advancements of LLMs in natural language processing tasks. It is important therefore to understand LLMs before using them in practice. We provide an overview of LLMs and the strengths, biases, risks, and potential uses in genetic counseling. We discuss how these models show promise for supporting certain tasks in genetic healthcare (e.g., letter writing, triage, intake or follow-up, decision aids, chatbots, and simulations). However, any interaction between LLMs and clients or clients' confidential information raises significant ethical, regulatory, and privacy concerns that are yet to be addressed. While LLMs may excel in information processing and are making unprecedented strides with regard to communication, we highlight aspects of psychotherapeutic encounters that require human interaction. Although LLMs/chatbots can provide information relevant to genetic tests and can mimic empathy, we postulate that these interactions cannot adequately replace the personalized application of counseling theory, skills, knowledge, and decision-making provided by a human genetic counselor. We propose that LLMs show great potential for use in aspects of genetic counseling practice. A continued, strengthened philosophical focus on the counseling process and psychotherapeutic goals of practice will be an essential aspect of genetic counselors' roles in the era of AI-supported counseling. Ongoing attention to the deployment of AI in clinical contexts and the relational elements of care will help ensure quality care for clients.

  • Research Article
  • 10.16538/j.cnki.fem.20201226.301
Artificial Intelligence Marketing: A Research Review and Prospects
  • Jul 20, 2021
  • Foreign Economics & Management
  • Guowei Zhu + 4 more

The development of artificial intelligence provides new opportunities and solutions for marketing and promotes the transformation of marketing toward intelligence. Many companies have begun to deploy artificial intelligence marketing, and as an emerging research topic it has attracted increasing attention from researchers. However, current research on artificial intelligence marketing is mostly based on specific marketing scenarios, and the lack of a systematic review makes it difficult for researchers and practitioners to understand the field as a whole. In view of this, this paper reviews the current research on artificial intelligence marketing from three aspects: connotation, corporate practice, and user response. The study identifies 72 relevant articles through the steps of identifying keywords, searching the literature, preliminary screening, supplementing important articles, and close reading. Based on this review, the paper proposes a definition of artificial intelligence marketing and discusses its foundation, characteristics, and purpose; it then summarizes the practical application of artificial intelligence in user insight, content management, interactive delivery, and monitoring and evaluation; finally, it sorts out the psychological and behavioral responses of users to data collection, delivery and recommendation, and human-computer interaction in the artificial intelligence marketing process. The paper also summarizes the research framework of artificial intelligence marketing and discusses future research directions. The study finds that no unified definition of artificial intelligence marketing has yet emerged.
Big data and artificial intelligence are its technical foundation, and intelligence is its defining feature, mainly manifested in intelligent processing, intelligent decision-making, and intelligent execution; its purpose is to realize the value co-creation of enterprises and users. In practical application, artificial intelligence can help enterprises gain insight into and predict user behavior, automate and systematize content management, realize real-time interaction with users and intelligent content delivery and recommendation, and perform real-time and anomaly monitoring so that enterprises can accurately gauge marketing effectiveness. Regarding data collection, delivery and recommendation, and human-computer interaction in the artificial intelligence marketing process, users may feel served or used, understood or manipulated, and their acceptance of artificial intelligence customer service is low. The main contributions of this paper are threefold. First, it systematically maps the landscape of artificial intelligence marketing for the first time, helping researchers and practitioners deepen their understanding of the field. Second, it summarizes the research framework and future research directions of artificial intelligence marketing, providing a reference for future research. Third, by exploring the practical application of artificial intelligence marketing in enterprises and the psychological and behavioral responses of users, it helps practitioners better carry out artificial intelligence marketing.

  • Research Article
  • 10.25313/2520-2294-2022-11-8425
THE IMPACT OF ARTIFICIAL INTELLIGENCE TECHNOLOGIES ON THE EFFICIENCY OF BUSINESS ACTIVITY
  • Jan 1, 2022
  • International scientific journal "Internauka". Series: "Economic Sciences"
  • Nataliіa Skopenko + 2 more

Current challenges have accelerated the implementation of modern business concepts. Among the many practices of continuous business process improvement is digitalization. Attention is focused on the benefits of digitalization for companies: improving process quality, reducing cycle times, fulfilling orders quickly, and thereby increasing customer loyalty. The concept of artificial intelligence is analysed and its three main types are identified: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence. Artificial narrow intelligence is focused on solving a narrowly defined, structured task; artificial general intelligence is aimed at solving any problem and can respond to different environments and situations; artificial superintelligence would be able to surpass people in absolutely everything, including coping with creative tasks, decision-making, and maintaining emotional relationships. The advantages of using artificial intelligence (accuracy in data processing, the ability to quickly analyse large amounts of information to facilitate timely decision-making) are revealed. The main threats of artificial intelligence (the disappearance of jobs, mass unemployment, and the loss of human control over artificial intelligence) are also indicated. The most common artificial intelligence technologies in enterprises (data science, machine learning, robotization) are considered. The experience of business entities in implementing various artificial intelligence tools in operational activities, in the medical, legal, space, banking, and educational spheres, is presented. In the educational field, it is emphasized that annual growth in artificial intelligence is expected to reach 45% by 2030. It is also highlighted that artificial intelligence contributes to business development and global economic activity.
The world's key players in the artificial intelligence market are considered, the top 10 world IT corporations are presented, the growth of their key performance indicators after the introduction of artificial intelligence technologies in goods and services is investigated.
