Teaching for Integrity in the Age of Generative Artificial Intelligence

  • Abstract
  • Literature Map
  • Similar Papers
Abstract
Translate article icon Translate Article Star icon
Take notes icon Take Notes

Teaching for Integrity in the Age of Generative Artificial Intelligence

Similar Papers
  • Research Article
  • Cite Count Icon 34
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI bring that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language“” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of the generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues were “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w

  • Discussion
  • Cite Count Icon 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer

Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".

  • Research Article
  • Cite Count Icon 1
  • 10.1108/tg-08-2025-0240
Generative AI and the urban AI policy challenges ahead: Trustworthy for whom?
  • Dec 4, 2025
  • Transforming Government: People, Process and Policy
  • Igor Calzada

Purpose This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of Generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue–spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences. Design/methodology/approach This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions. Findings This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems – particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement. Research limitations/implications As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures. Practical implications This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts. Social implications The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences. Originality/value This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.

  • Research Article
  • Cite Count Icon 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more

How Can <i>IJDS</i> Authors, Reviewers, and Editors Use (and Misuse) Generative AI?

  • Research Article
  • Cite Count Icon 16
  • 10.1162/daed_e_01897
Getting AI Right: Introductory Notes on AI &amp; Society
  • May 1, 2022
  • Daedalus
  • James Manyika

This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that Als beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the emergence recently of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding. While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention.Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators - now covering more than one hundred languages - facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions. As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations, mathematical, technical, and philosophical-with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI - the connectionist learning approach and the notion of neural nets-and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much." Putnam questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?" He concluded:This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence. This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society. I am grateful to the authors for accepting my invitation to write these essays.Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI. This has not been helped by the current trend-among researchers in science and the humanities, startups, established companies, and even governments-to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI.Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually human, can do. For example, artificial intelligence is:The human abilities invoked in such definitions include visual perception, speech recognition, the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later). The second type of definitions try to be free of human-centricity and define an intelligent agent or system, whatever its origin, makeup, or method, as:This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own.These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects.Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us?Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibnitz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it and the use of the term is generally attributed to the now famed Dartmouth summer workshop of 1956. The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieve artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was most espoused by Allen Newell and Hebert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers. The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems.Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition-exemplified by Descartes, Boole, Frege, Russell, and Church, among others-that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others. The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach.Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come. The success of the machine learning approach has benefited from the boon in the availability of data to train the algorithms thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses.Several authors delve into progress in key subfields of AI.21 In their essay, "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets such as ImageNet that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models comprising hundreds of billions of parameters and that use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity." Some have started to refer to these large language models as foundational models in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations in the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems. In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior. Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although so far more in one than the other, as and have the success of the current approaches to AI, there are still many shortcomings and as well as problems in It is useful to on one such as when AI does not as or or or that can to or when it on or information about the world, or when it has such as of all of which can to a of public shortcomings have captured the attention of the wider public and as well as among there is an on AI and In recent years, there has been a of to principles and approaches to AI, as well as involving and such as the on AI, that to best important has been the of with to and - in the and developing AI in both and as has been well in recent This is an important in its own but also with to the of the resulting AI and, in its intersections with more the other there are limitations and problems associated with the that AI is not capable of if could to more more or more general AI. In their Turing deep learning and Geoffrey took of where deep learning and highlighted its current such as the with In the case of natural language processing, Manning and Choi the challenges in and despite the of large language Elsewhere, and have the notion that large language models do anything learning, or In & of in a and discuss the problems in systems, the as how to reason about other their systems, and well as challenges in both and especially when the include both humans and Elsewhere, and others a useful of the problems in there is a growing among many that we do not have for the of AI systems, especially as they become more capable and the of use although AI and its related techniques are to be powerful tools for research in science, as examples in this volume and recent examples in which AI not only help results but also by design and become what some have AI to science and and to and challenges for the possibility that more powerful AI could to new in science, as well as progress in some of challenges and has long been a key for many at the frontier of AI research to more capable the of each of AI, the of more general problems that to the possibility of more capable AI learning, reasoning, of and and of these and other problems that could to more capable systems the of whether current characterized by deep learning, the of and and more foundational and and reinforcement or whether different approaches are in such as cognitive agent approaches or or based on logic and probability theory, to name a few. whether and what of approaches be the AI is but many the current along with of and learning architectures have to their about the of the current approaches is associated with the of whether artificial general intelligence can be and if how and Artificial general intelligence is in to what is called that AI and for tasks and goals, such as The development of on the other aims for more powerful AI - at as powerful as is generally to problem or and, in some the capacity to and improve as well as set and its own and the of and when will be is a for most that its achievement have and as is often in and such as A through and The to Ex and it is or there is growing among many at the frontier of AI research that we for the possibility of powerful with to and and with humans, its and use, and the possibility that of could and that we these into how we approach the development of of the research and development, and in AI is of the AI and in its what Nigel Shadbolt the of AI. This is given the for useful and applications and the for in sectors of the However, a few have made the development of their the most of these are and each of which has demonstrated results of increasing still a long way from the most discussed impact of AI and automation is on and the future of This is not In in the of the excitement about AI and and concerns about their impact on a on and the was that such technologies were important for growth and and "the that but not Most recent of this including those I have been involved have and that over time, more are than are that it is the and the and the of will the In their essay AI & and John discuss these for work and further, in & the of & to discuss the with to and and as well as the opportunities that are especially in developing In "The Turing The & of Artificial Intelligence," discusses how the use of human benchmarks in the development of AI the of AI that rather than human He that the AI's development will take in this and resulting for will on the for companies, and a that the that more will be than too much from of the and does not far enough into the future and at what AI will be capable The for AI could from of that in the is and labor and ability to are and and until automation has mostly physical and but that AI will be on more cognitive and tasks based on and, if early examples are even tasks are not of the In other are now in the world machines that that learn and that their ability to do these is to a range of problems they can will be with the range to which the human has been This was and Allen Newell in that this time could be different usually two that new labor will in which will by other humans for their own even when machines may be capable of these as well as or even better than The other is that AI will create so much and all without the for human and the of will be to for when that will the that once the first time since his creation will be with his his to use his from how to the which science and interest will have for to live and and However, most researchers that we are not to a future in which the of will and that until then, there are other and that be in the labor now and in the such as and other and how humans work increasingly capable that and John and discuss in this are not the only of the by AI. Russell a of the potentially from artificial general intelligence, once a of or ten But even we to general-purpose AI, the opportunities for companies and, for the and growth as well as from AI and its related technologies are more than to pursuit and by companies and in the development, and use of AI. At the many the is it is generally that is a in AI, as by its growth in AI research, and as highlighted in several will have for companies and given the of such technologies as discussed by and others the may in the way of approaches to AI and (such as whether they are companies or as and have have the to to in AI. The role of AI in intelligence, systems, autonomous even and other of increasingly In &

  • Conference Article
  • 10.54941/ahfe1004960
Democracy and Artificial General Intelligence
  • Jan 1, 2024
  • Elina Kontio + 1 more

We may have to soon decide what kind of Artificial General Intelligence (AGI) computers we will build and how they will coexist with humans. Many predictions estimate that artificial intelligence will surpass human intelligence during this century. This poses a risk to humans: computers may cause harm to humans either intentionally or unintentionally. Here we outline a possible democratic society structure that will allow both humans and artificial general intelligence computers to participate peacefully in a common society.There is a potential for conflict between humans and AGIs. AGIs set their own goals which may or may not be compatible with the human society. In human societies conflicts can be avoided through negotiations: all humans have the about the same world view and there is an accepted set of human rights and a framework of international and national legislation. In the worst case, AGIs harm humans either intentionally or unintentionally, or they can deplete the human society of resources.So far, the discussion has been dominated by the view that the AGIs should contain fail-safe mechanisms which prevent conflicts with humans. However, even though this is a logical way of controlling AGIs we feel that the risks can also be handled by using the existing democratic structures in a way that will make it less appealing to AGIs (and humans) to create conflicts.The view of AGIs that we use in this article follows Kantian autonomy where a device sets goals for itself and has urges or drives like humans. These goals may conflict with other actors’ goals which leads to a competition for resources. The way of acting and reacting to other entities creates a personality which can differ from AGI to AGI. The personality may not be like a human personality but nevertheless, it is an individual way of behaviour.The Kantian view of autonomy can be criticized because it neglects the social aspect. The AGIs’ individual level of autonomy determines how strong is their society and how strongly integrated they would be with the human society. The critic of their Kantian autonomy is valid, and it is here that we wish to intervene.In Kantian tradition, conscious humans have free will which makes them morally responsible. Traditionally we think that computers, like animals lack free will or, perhaps, deep feelings. They do not share human values. They cannot express their internal world like humans. This affects the way that AGIs can be seen as moral actors. Often the problem of constraining AGIs has used a technical approach, placing different checks and designs that will reduce the likelihood of adverse behaviour towards humans. In this article we take another point of view. We will look at the way humans behave towards each other and try to find a way of using the same approaches with AGIs.

  • PDF Download Icon
  • Research Article
  • Cite Count Icon 239
  • 10.1057/s41599-020-0494-4
Why general artificial intelligence will not be realized
  • Jun 17, 2020
  • Humanities and Social Sciences Communications
  • Ragnar Fjelland

The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines, but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence. This is known as weak AI. However, many AI researcher have pursued the aim of developing artificial intelligence that is in principle identical to human intelligence, called strong AI. Weak AI is less ambitious than strong AI, and therefore less controversial. However, there are important controversies related to weak AI as well. This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Although AGI may be classified as weak AI, it is close to strong AI because one chief characteristics of human intelligence is its generality. Although AGI is less ambitious than strong AI, there were critics almost from the very beginning. One of the leading critics was the philosopher Hubert Dreyfus, who argued that computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. However, today one might argue that new approaches to artificial intelligence research have made his arguments obsolete. Deep learning and Big Data are among the latest approaches, and advocates argue that they will be able to realize AGI. A closer look reveals that although development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world.

  • Research Article
  • 10.1108/dts-08-2025-0255
User readiness and technology adoption in AI-driven smart cities: a systematic review of generative and predictive models for advancing the SDGs
  • Dec 4, 2025
  • Digital Transformation and Society
  • Nuning Kristiani + 3 more

Purpose This study examines the integration of generative and predictive artificial intelligence (AI) models within smart cities, focusing on how user readiness and technology adoption influence their contribution to sustainable urban development and governance. Design/methodology/approach The study applies a systematic literature review following PRISMA guidelines and synthesizes evidence from 50 peer-reviewed studies (2018–2025) indexed in Scopus and Web of Science. It combines bibliometric mapping using VOSviewer with thematic analysis to examine the drivers, barriers and governance mechanisms shaping the adoption of generative, predictive and hybrid applications in urban contexts. Findings Generative AI fosters participatory engagement, citizen co-design and interactive simulations, advancing SDG 11 (Sustainable Cities and Communities) and SDG 4 (Quality Education) through enhanced digital literacy and inclusive planning. Predictive AI improves operational efficiency, forecasting accuracy and data-driven policymaking, supporting SDG 9 (Industry, Innovation and Infrastructure) and SDG 13 (Climate Action) by promoting sustainable resource use and climate-resilient management. Hybrid AI integrates these strengths, addressing both social and operational aspects of smart city development and aligning with SDG 17 (Partnerships for the Goals) through cross-sector collaboration and shared governance. Collectively, these models contribute to broader sustainability goals, including SDGs 3, 7 and 12. Research limitations/implications This review acknowledges several key limitations. Reliance on Scopus and Web of Science may exclude regionally significant or domain-specific studies not indexed in these databases. The focus on English-language publications introduces potential language bias, possibly overlooking relevant research from non-English-speaking regions. Restricting the timeframe to 2018–2025 captures recent developments but may omit earlier foundational work or the most recent studies not yet indexed. Differences in research design, policy contexts and sample characteristics also affect comparability and limit generalizability. Future research should broaden data sources, include multilingual literature and adopt mixed-methods and longitudinal approaches to enhance contextual diversity and empirical robustness. Practical implications The findings provide practical guidance for policymakers, urban planners and technology developers to design AI governance systems that are transparent, accountable and aligned with the SDGs. Integrating generative and predictive AI can enhance operational efficiency, support participatory planning and promote responsible decision-making. The findings inform the development of adaptive policy frameworks that advance SDG 9 (Industry, Innovation and Infrastructure), SDG 11 (Sustainable Cities and Communities) and SDG 13 (Climate Action) through digital literacy initiatives, cross-sector collaboration and data-informed management. Strengthening these practices enables cities to translate AI’s potential into tangible contributions to inclusive and sustainable urban transformation. Social implications Integrating user readiness and digital literacy into AI adoption is essential for building inclusive and trustworthy smart cities. These efforts support SDG 4 (Quality Education), SDG 10 (Reduced Inequalities) and SDG 16 (Peace, Justice and Strong Institutions). Generative AI encourages citizen participation and collaborative planning, while predictive AI improves service accessibility and data-informed governance. Promoting ethical awareness and community engagement helps narrow digital divides and address bias. Collectively, these elements advance SDG 11 (Sustainable Cities and Communities) and SDG 17 (Partnerships for the Goals) by fostering socially responsive and transparent AI-driven urban development. Originality/value This review is among the first to integrate perspectives on user readiness and technology adoption with comparative insights into generative and predictive AI in smart cities. It advances understanding of how AI-driven urban innovation supports inclusivity, efficiency and sustainability, while outlining policy directions and a future research agenda for equitable and transparent AI governance.

  • Research Article
  • 10.1152/advan.00119.2025
Concepts behind clips: cinema to teach the science of artificial intelligence to undergraduate medical students.
  • Dec 1, 2025
  • Advances in physiology education
  • Krishna Mohan Surapaneni

As artificial intelligence (AI) is becoming more integrated into the field of healthcare, medical students need to learn foundational AI literacy. Yet, traditional, descriptive teaching methods of AI topics are often ineffective in engaging the learners. This article introduces a new application of cinema to teaching AI concepts in medical education. With meticulously chosen movie clips from "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)" movie, the students were introduced to the primary differences between artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). This method triggered encouraging responses from students, with learners indicating greater conceptual clarity and heightened interest. Film as an emotive and visual medium not only makes difficult concepts easy to understand but also encourages curiosity, ethical consideration, and higher order thought. This pedagogic intervention demonstrates how narrative-based learning can make abstract AI systems more relatable and clinically relevant for future physicians. Beyond technical content, the method can offer opportunities to cultivate critical engagement with ethical and practical dimensions of AI in healthcare. Integrating film into AI instruction could bridge the gap between theoretical knowledge and clinical application, offering a compelling pathway to enrich medical education in a rapidly evolving digital age.NEW & NOTEWORTHY This article introduces a new learning strategy that employs film to instruct artificial intelligence (AI) principles in medical education. By introducing clips the from "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)" movie to clarify artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI), the approach converted passive learning into an emotionally evocative and intellectually stimulating experience. Students experienced enhanced comprehension and increased interest in artificial intelligence. This narrative-driven, visually oriented process promises to incorporate technical and ethical AI literacy into medical curricula with enduring relevance and impact.

  • Research Article
  • 10.70777/si.v1i1.11101
Highlights of the Issue
  • Oct 15, 2024
  • SuperIntelligence - Robotics - Safety &amp; Alignment
  • Kristen Carlson

To emphasize the journal’s concern with AGI safety, we inaugurate Artificial General Intelligence (AGI) by focusing the first issue on Risks, Governance, and Safety &amp; Alignment Methods. Risks The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence The most comprehensive AI risk taxonomy — 777 specific risks classified into 43 categories — to date has been created by workers collaborating from a half-dozen institutions. We except 11 key pages from the original 79-page report. Their ‘living’ Repository is online and free to download and share. The authors’ intention is to provide a common frame of reference for AI risks. Slattery et al.’s set of ~100 references is excellent and thorough. Thus, pouring through this study for your own specific interest is an efficient way to get on top of the entire current AI risk literature. The highest of their three taxonomy levels, the Causal Taxonomy, is categorized according to the cause of the risk, Human or AI the intention , Intentional action or Unintentional, and timing — Pre-deployment of the AI system or Post-deployment. The Causal Taxonomy can be used “for understanding how, when, or why risks from AI may emerge.” They also call readers’ attention to the AI Incident Database.[1] The Incident Database publishes a monthly roundup here. AI Risk Categorization Decoded (AIR 2024) By examining 8 government and 16 corporate AI risk policies, Zeng et al. seek to provide an AI risk taxonomy unified across public and private sector methodologies. They present 314 risk categories organized into a 4-level hierarchy. The highest level is composed of System &amp; Operational Risks, Content Safety Risks, Societal Risks, and Legal &amp; Rights Risks. Their first takeaway from their analysis is more categories is advantageous, allowing finer granularity in identifying risks and unifying risk categories across methodologies. Thus, indirectly they argue for the Slattery et al. taxonomy with double the categories. This emphasis on fine granularity parallels a comment made to me by Lance Fortnow, Dean of Illinois Institute of Technology College of Computing, on the diversity and specificity of human laws indicating a similar diversity may be necessary to assure AGI safety, and that recent governance proposals may be simplistic. Indeed, Zeng et al.’s second takeaway is that government AI regulation may need significant expansion. Few regulations address foundation models, for instance. And their third takeaway is that comparing AI risk policies from diverse sources is extremely helpful to develop an overall grasp of the issues – how different organizations conceptualize risk, for instance – and how to move toward international cooperation to manage AI risk. AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies Applying the work just described above, Zeng et al. constructed an AI safety benchmark aligned with their unified view of private and public sector AI risk policy and specifically targeting the gap in regulation of foundation models they uncovered. They develop and test nearly 6000 risky prompts and find inconsistent responses across foundation models. Zeng et al. give examples of foundation model safety failures in response to various prompts. This work seems a significant advance toward an AGI safety certification conducted by an AI industry consortium or an insurance company consortium along the lines of, e.g., UL Solutions (previously Underwriters’ Laboratory). A Comprehensive Survey of Advanced Persistent Threat Attribution We wanted to publish this important article had to pull it due to a license conflict – please see their arXiv preprint. APT [Advanced Persistent Threat] attacks are attack campaigns orchestrated by highly organized and often state-sponsored threat groups that operate covertly and methodically over prolonged periods. APTs set themselves apart from conventional cyber-attacks by their stealthiness, persistence, and precision in targeting. This systematic review by Rani et al. of 137 papers focuses on the increasing development of automated means to detect AI and ML APTs early and identify the malevolent actors involved. They present the Automated Attribution Framework, which consists of 1) collecting the training data of past attacks, 2) preprocessing and enrichment of the training data, 3) the actual training and pattern recognition on the data, and 4) attribution — applying the trained models to identify the malevolent perpetrating actors. The open research questions summarized by Rani et al. lead toward AI taking an increasing role in APT attribution. Governance Excerpts from Aschenbrenner, Situational Awareness I was pointed to Leopold Aschenbrenner’s 165-page missive by Scott Aaronson’s blog, which said he knew Leopold during his sabbatical at OpenAI and recommended people give it a read and take it seriously. The essence of it is that if we extrapolate from recent AI progress, we will have AGI by 2030, and therefore, for national security, a Manhattan Project-style national AI effort, including nationalizing leading private AGI labs, should be mounted. Here we reprint his Part IV, “The Project,” advocating this controversial effort and describing his vision of how it will occur. I recommend anyone concerned about the dangers of AGI, and especially those working toward AGI, read Aschenbrenner’s entire book. Take a look at the Table of Contents preceding our reprint of “The Project.” And we reprint his Ch. V, “Parting Thoughts,” in our Commentary section. Soft Nationalization: How the US Government Will Control AI Labs Aschenbrenner advocates nationalizing leading AI labs into a high-security, top-secret, US federal government project. OK, how, exactly? A perfect complement to Aschenbrenner’s thoughts is given by Deric Cheng and Corin Katzke of Convergence Analysis. They examine how AGI R&amp;D nationalization could happen realistically, effectively, and efficiently. Their report outlines key issues and initial thoughts as a prelude to their own and others’ detailed proposals to come. It is a beautiful piece of work, IMHO. It is not impossible for private companies to develop AGI responsibly and securely, but the main goal of this journal is to make AGI safety the central debate in the AGI community, and the nationalized, Manhattan-style project point of view must be presented. Further, I find Aschenbrenner’s arguments to be persuasive and Cheng and Katzke’s thoughtful outline of how nationalization could actually occur to be convincing, e.g. (pg. 8): The US may be able to achieve its national security goals with substantially less overhead than total nationalization via effective policy levers and regulation… We argue that various combinations of the policy levers listed below will likely be sufficient to meet US national security concerns, while allowing for more minimal governmental intrusion into private frontier AI development. Acceptable Use Policies for Foundation Models Acceptable use policies are legally binding policies that prohibit specific uses of foundation models. Klyman surveys acceptable use policies from 30 developers encompassing 127 specific use restrictions cited in 184 articles. Like Zeng et al. in “AI Risk Categorization Decoded (AIR 2024),” Klyman highlights the inconsistent number and type of restrictions across developers and lack of transparency behind their motivation and enforcement, indicating the need to for developers to create a unified consensus acceptable use policy. The general motivations are to reduce legal and reputational risk. However, standing in the way of developers working to create a unified policy set is the motivation to use restrictions to hinder competition from using proprietary models. Enforcement can also hinder effective use of a foundation model. Acceptable use policies can be categorized into content restrictions (e.g. the top 4: misinformation, harassment, privacy, discrimination) and end use restrictions, e.g. Anthropic’s restriction on “model scraping,” which is someone training their own AI model on prompts and outputs from Anthropic’s model. Another use restriction is scaling up AI-created content distribution such as automated online posting. As with the Zeng et al. articles, Klyman’s article points the way to create a homogeneous acceptable use policy across a diverse AI ecosystem. Steve Omohundro comments: “…the AI labs’ ‘alignment work’ … is all about the AIs rather than their impact on the world. For goodness sake, the Chinese People's Liberation Army has already fine-tuned Meta's Llama 3.1 to promote Chinese military goals! And Meta's response was ‘that's contrary to our acceptable use policy!’" From the article: Without information about how acceptable use policies are enforced, it is not obvious that they are actually being implemented or effective in limiting dangerous uses. Companies are moving quickly to deploy their models and may in practice invest little in establishing and maintaining the trust and safety teams required to enforce their policies to limit risky uses. Safety Methods Benchmark Early and Red Team Often (Executive Summary excerpt) Two leading methods for uncovering AI safety breaches are 1) inexpensive benchmarking against a standardized test suite, such as prompts for large language models, and 2) longer, higher-cost but more informative intensive, interactive testing by human domain experts (“red-teaming”). Barrett et al., from the UC Berkeley Center for Long-Term Cybersecurity, advocate for this two-pronged approach indicated by the article title. They analyze the methods’ potential for eliminating LLM “dual” use, i.e. corrupting LLMs into creating chemical, biological, radiological, nuclear (CBRN) or cyber or other weaponry or attacks, but the methods apply to less dangerous risk testing as well. Essentially Barrett et al. advocate frequent use of benchmarks until a model attains a high safety score, followed by intensive red-teaming to test the model in more depth and yield more accuracy. Their paraphrase of the article title is: Benchmark Early and Often, and Red-Team Often Enough. Against Purposeful Artificial Intelligence Failures A paper that had to be written, and not surprisingly was, by Yampolskiy, who has sought to cover every aspect of AGI risks, is one arguing that intentionally triggering an AI disaster should not be entertained as an option to alert humanity to the danger of AGI. Models That Prove Their Own Correctness Especially in light of Dalrymple et al.’s governance proposal, Toward Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, ‘models that prove their own correctness’ seems especially desirable, if not essential. Dalrymple et al. call for 1) a world model, 2) a safety specification, and 3), a means to verify the safety specification, a highly intriguing proposal, but which falls short of providing an example of such a model or means of verification (we hear that Dalrymple is working on an example). Paradise et al. describe two uses of interactive proof systems (IPS) combined with ML to allow a model to prove its own ‘correctness,’ as specified by the user of the model. The first method requires access to a training set of IPS transcripts (the sequence of interactions between the Verifier and Prover) in which the Verifier accepted the Prover’s probabilistic proof. The second method, Reinforcement Learning from Verifier Feedback (RLVF; note their intentional similarity to Reinforcement Learning from Human Feedback, RLHF) avoids the need for the accepted transcripts (which are in essence an external truth oracle) but only after training on such a verified transcript (its ‘base model’) using transcript learning. From then on it can generate its own emulated verified transcripts. The paper opens the door to other innovative applications of ML to IPS. This is a rather deep paper that requires further analysis to judge the realization of its promise. We look forward to a revised version after its peer review at an unspecified journal. We thank Syed Rafi for the pointer to the paper and Quinn Dougherty for inviting Orr Paradise to his safe AGI reading group. Language-Guided World Models: A Model-Based Approach to AI Control Model-based agents are artificial agents equipped with probabilistic “world models” that are capable of foreseeing the future state of an environment (Deisenroth and Rasmussen, 2011; Schmidhuber, 2015). World models endow these agents with the ability to plan and learn in imagination (i.e., internal simulation)…. Citing Dalrymple et al., Zhang et al. likewise extend the capabilities of world models to increase human control over AI. By adjusting the world model, humans can affect many context-sensitive policies simultaneously. However, for the human-AI interaction to be efficient, the world model must process natural language (NLP); hence, language-guided world models (LWMs). NLP also increases the efficiency of model learning by permitting them to read text. World models increase AI transparency, which NL interaction furthers by allowing humans to query models verbally. As an example, in Sec. 5.3, “Application: Agents that discuss plans with humans,” Zhang et al. describe an agent that uses its LWM to plan a task and then ask a human to review it for safety. Commentary Steve Omohundro, “Progress in Superhuman Theorem Proving?” Our co-founding editor Steve Omohundro is a strong proponent of Provably Safe AI, in which automated theorem-proving will play a major role.[2] Here Steve discusses current developments in using proof to lessen LLM hallucinations, the implications of superhuman theorem-proving for safe AGI and resources for interested readers. On Yampolskiy, “Against Purposeful Artificial Intelligence Failures” Topic Editor Jim Miller, Professor of Economics, Game Theory, and Sociology at Smith College, critiques Roman Yampolskiy’s argument against triggering a deliberate AI failure to wake the world up to AI dangers. Leopold Aschenbrenner, Situational Awareness, “Parting Thoughts” Aschenbrenner dismisses his critics as unrealistic and outlines the core tenets of “AI Realism.” Rowan McGovern, “Unhobbling Is All You Need?” Commentary on Aschenbrenner’s Situational Awareness McGovern questions Aschenbrenner’s fundamental assumption that “unhobbling” alone — “fixing obvious ways in which models are hobbled by default, unlocking latent capabilities and giving them tools, leading to step-changes in usefulness” — will result in his extrapolation of recent AI progress to predict the advent of AGI by 2030. McGovern: “Unhobbling conflates computing power with intelligence.” [1] https://incidentdatabase.ai/. “Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.” [2] Tegmark, M., &amp; Omohundro, S. (2023). Provably safe systems: the only path to controllable AGI. arXiv. Retrieved from https://arxiv.org/abs/2309.01933.

  • Research Article
  • Cite Count Icon 2
  • 10.4467/29567610pib.24.002.19838
Sztuczna inteligencja a bezpieczeństwo państwa
  • Jun 10, 2024
  • Prawo i Bezpieczeństwo
  • Norbert Malec

Technologically advanced artificial intelligence (AI) is making a significant contribution to strengthening national security. AI algorithms facilitate the processing of vast amounts of information, increasing the speed and accuracy of decision-making. Artificial intelligence and machine learning (AI/ML) are crucial for state and integrated hybrid attacks and protecting new threats in cyberspace. Existing AI capabilities have significant potential to impact national security by leveraging existing machine learning technology for automation in labor-intensive activities such as satellite imagery analysis and defense against cyber attacks. This article examines selected aspects of the impact of artificial intelligence on enhancing a state’s ability to protect its interests and its citizens., artificial intelligence through the use of neutron networks, predictive analytics and machine learning algorithms enables security agencies to analyse vast amounts of data and identify patterns indicative of potential threats. Integrating artificial intelligence into surveillance, border control and threat assessment systems enhances the ability to respond preemptively to security challenges. In addition, artificial intelligence algorithms facilitate the processing of vast amounts of information, increasing the speed and accuracy of decision-making by police authorities. The rapid development of AI raises a number of questions for its use in securing not only national security but protecting all citizens. In particular, it is worth answering the question How does artificial intelligence affect national security and clarifying the issue of how law enforcement agencies can use artificial intelligence to maximise the benefits of the new technology in terms of security and protecting communities from rising crime. The analysis is based on a descriptive method in describing the phenomenon; by explaining the concepts and applications of artificial intelligence to determine its role in the national security sphere. An analysis of the usefulness of artificial intelligence in particular in police operations is undertaken, with the aim of defending the thesis that, despite some threats to the protection of human rights from AI, it is becoming the best tool in the fight against all types of crime in the country. Technological advances in AI can also have many positive effects for law enforcement, and useful for law enforcement agencies, for example in facilitating the identification of persons or vehicles, predicting trends in criminal activities, tracking illegal criminal activities or illegal money flows, flagging and responding to fake news. Artificial intelligence (AI) has emerged as one of the biggest threats to information security, but efforts are being made to mitigate this new threat, but also to find solutions on how AI can become an ally in the fight against cyber-security, crime and terrorist threats. Artificial intelligence algorithms search huge datasets of communication traffic, satellite images and social media posts to identify potential cyber security threats, terrorist activities and organized crime. It is advisable, when analyzing the opportunities and threats that AI poses to national and public security, to gain a strategic advantage in the context of rapid technological change and also to manage the many risks associated with AI. The conclusion highlights the impact of AI on national security, creating a range of new opportunities coupled with challenges that government agencies should be prepared for in addressing ethical and security dilemmas. Furthermore, AI improves predictive analytics, thereby enabling security agencies to more accurately anticipate potential threats and enhance their preparedness by identifying vulnerabilities in the national security infrastructure

  • Research Article
  • 10.17509/ijotis.v5i1.82626
The Future of Teaching: Artificial Intelligence (AI) And Artificial General Intelligence (AGI) For Smarter, Adaptive, and Data-Driven Educator Training
  • Nov 21, 2024
  • Indonesian Journal of Teaching in Science
  • Kumar Balasubramanian

The fast evolution of Artificial Intelligence (AI) and the developing Artificial General Intelligence (AGI) capabilities transform how education operates, particularly through its effect on teacher training. AI-based systems provide adaptable learning spaces, and they offer both real-time assessment capabilities and data-driven educational method improvements. With its capability for human-level cognitive operations, AGI creates conditions to transform educator skill advancement processes. The article examines AI and AGI integration within teacher education programs by discussing their practical uses and advantages, together with the encountered challenges and ethical dilemmas. The analysis combines evaluative and creative AI tools like Gradescope and ChatGPT, and Carnegie Learning, with developing capabilities in AGI. The article uses detailed analysis, together with tables, along pictorial representations to show the necessity of achieving optimal teacher training through AI-human balanced cooperation. The research finds that AI brings efficiency benefits, but AGI's prospective function needs strict governance together with educational alignment, to maintain ethical, unbiased teacher education.

  • Book Chapter
  • 10.2174/9789815165739123010004
Artificial General Intelligence; Pragmatism or an Antithesis?
  • Nov 23, 2023
  • K Ravi Kumar Reddy + 2 more

Artificial intelligence is promoted by means of incomprehensible advocacy through business majors that cannot easily be equated with human consciousness and abilities. Behavioral natural systems are quite different from language models and numeric inferences. This paper reviews through centuries of evolved human knowledge, and the resolutions as referred through the critics of mythology, literature, imagination of celluloid, and technical work products, which are against the intellect of both educative and fear mongering. Human metamorphic abilities are compared against the possible machine takeover and scope of envisaged arguments across both the worlds of ‘Artificial Intelligence’ and ‘Artificial General Intelligence’ with perpetual integrations through ‘Deep Learning’ and ‘Machine Learning’, which are early adaptive to ‘Artificial Narrow Intelligence’ — a cross examination of hypothetical paranoid that is gripping humanity in modern history. The potentiality of a highly sensitive humanoid and sanctification to complete consciousness at par may not be a near probability, but social engineering through the early stages in life may indoctrinate biological senses to a much lower level of ascendancy to Artificial Narrow Intelligence — with furtherance in swindling advancement in processes may reach to a pseudo-Artificial Intelligence {i}. There are no convincing answers to the discoveries from ancient scriptures about the consciousness of archetypal humans against an anticipated replication of a fulfilling Artificial Intelligence {ii}. Human use of lexicon has been the focal of automata for the past few years and the genesis for knowledge, and with the divergence of languages and dialects, scores of dictionaries and tools that perform bidirectional voice and text — contextual services are already influencing the lives, and appeasement to selective humanly incidentals is widely sustainable today {iii}. Synthesizing and harmonizing a pretentious labyrinthine gizmo is the center of human anxiety, but only evaluative research could corroborate that tantamount to genetic consciousness.

  • Research Article
  • 10.3390/pr13051413
Artificial General Intelligence (AGI) Applications and Prospect in Oil and Gas Reservoir Development
  • May 6, 2025
  • Processes
  • Jiulong Wang + 3 more

The cornerstone of the global economy, oil and gas reservoir development, faces numerous challenges such as resource depletion, operational inefficiencies, safety concerns, and environmental impacts. In recent years, the integration of artificial intelligence (AI), particularly artificial general intelligence (AGI), has gained significant attention for its potential to address these challenges. This review explores the current state of AGI applications in the oil and gas sector, focusing on key areas such as data analysis, optimized decision and knowledge management, etc. AGIs, leveraging vast datasets and advanced retrieval-augmented generation (RAG) capabilities, have demonstrated remarkable success in automating data-driven decision-making processes, enhancing predictive analytics, and optimizing operational workflows. In exploration, AGIs assist in interpreting seismic data and geophysical surveys, providing insights into subsurface reservoirs with higher accuracy. During production, AGIs enable real-time analysis of operational data, predicting equipment failures, optimizing drilling parameters, and increasing production efficiency. Despite the promising applications, several challenges remain, including data quality, model interpretability, and the need for high-performance computing resources. This paper also discusses the future prospects of AGI in oil and gas reservoir development, highlighting the potential for multi-modal AI systems, which combine textual, numerical, and visual data to further enhance decision-making processes. In conclusion, AGIs have the potential to revolutionize oil and gas reservoir development by driving automation, enhancing operational efficiency, and improving safety. However, overcoming existing technical and organizational challenges will be essential for realizing the full potential of AI in this sector.

  • Book Chapter
  • Cite Count Icon 6
  • 10.1007/978-94-6265-523-2_26
Regulating Artificial General Intelligence (AGI)
  • Jan 1, 2022
  • Tobias Mahler

This chapter discusses whether on-going EU policymaking on AI is relevant for Artificial General Intelligence (AGI) and what it would mean to potentially regulate it in the future. AGI is typically contrasted with narrow Artificial Intelligence (AI), which excels only within a specific given context. Although many researchers are working on AGI, there is uncertainty about the feasibility of developing it. If achieved, AGI could have cognitive capabilities similar to or beyond those of humans and may be able to perform a broad range of tasks. There are concerns that such AGI could undergo recursive circles of self-improvement, potentially leading to superintelligence. With such capabilities, superintelligent AGI could be a significant power factor in society. However, dystopian superintelligence scenarios are highly controversial and uncertain, so regulating existing narrow AI should be a priority.Keywordsartificial general intelligenceregulationrisk managementexistential risksafetyEuropean Unionlaw

Save Icon
Up Arrow
Open/Close
  • Ask R Discovery Star icon
  • Chat PDF Star icon

AI summaries and top papers from 250M+ research sources.