Abstract

This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination, not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled “Living with Artificial Intelligence,” Stuart Russell states that “the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history.”2

Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3

Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the recent emergence of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding. While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention.

Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research, from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5

While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators - now covering more than one hundred languages - facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems, as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, “A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.”6

As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7

Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations - mathematical, technical, and philosophical - with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI - the connectionist learning approach and the notion of neural nets - and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's “Much Ado About Not Very Much.” Putnam questioned the Dædalus issue itself: “Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?” He concluded:

This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence. It does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society.

The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10

The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society.
I am grateful to the authors for accepting my invitation to write these essays.

Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI. This has not been helped by the current trend - among researchers in science and the humanities, startups, established companies, and even governments - to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reluctance to associate most of what we now call AI with AI.

Two types of definitions are typically given for AI. The first type comprises those that suggest it is the ability to artificially do what intelligent beings, usually human, can do. For example, artificial intelligence is:

The human abilities invoked in such definitions include visual perception, speech recognition, and the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity, both as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later). The second type of definition tries to be free of human-centricity and defines an intelligent agent or system, whatever its origin, makeup, or method, as:

This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own.

These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a “suitcase word.”14 It is packed variously, depending on whom you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects.

Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us?

Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it, and the use of the term, is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”17

In their respective contributions to this volume, “From So Simple a Beginning: Species of Artificial Intelligence” and “If We Succeed,” and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a “golden decade,” not only for the pace of AI development but also for its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieving artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19

A brief history may be useful here. In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was espoused most prominently by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the “heuristic search” approach, which supposed that intelligence was essentially a problem of exploring a space of possibilities for answers. The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connections of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems.

Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the weight of a long intellectual tradition - exemplified by Descartes, Boole, Frege, Russell, and Church, among others - that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others.
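To make the connectionist idea described above concrete, the following is a minimal sketch, in Python, of Rosenblatt's perceptron learning rule applied to invented, linearly separable data. It is an illustrative toy under stated assumptions (synthetic data, a single unit), not a description of any system or result discussed in this issue.

```python
import numpy as np

# Toy illustration of Rosenblatt's perceptron learning rule:
# a single unit learns a linear decision boundary from examples.
rng = np.random.default_rng(0)

# Two linearly separable clusters of 2-D points, labeled +1 and -1 (invented data).
X = np.vstack([rng.normal(loc=[2, 2], scale=0.5, size=(50, 2)),
               rng.normal(loc=[-2, -2], scale=0.5, size=(50, 2))])
y = np.array([1] * 50 + [-1] * 50)

w = np.zeros(2)   # weights
b = 0.0           # bias

for epoch in range(20):
    for xi, yi in zip(X, y):
        # Perceptron rule: update only when the current prediction is wrong.
        if yi * (np.dot(w, xi) + b) <= 0:
            w += yi * xi
            b += yi

predictions = np.sign(X @ w + b)
print("training accuracy:", np.mean(predictions == y))
```

A single unit of this kind can only separate classes with a line (or hyperplane), one of the limitations that contributed to the approach's decades at the fringes; the multi-layer networks and back-propagation revived in the late 1980s address exactly that restriction.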
The history of these two visions and the associated philosophical ideas is discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay “Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint.”20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach.

Kevin Scott, in his essay “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale,” reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but also compress and potentially generalize what they learn, and of the emerging realization of this in the systems now being built and those to come. The success of the machine learning approach has benefited from the boom in the availability of data to train the algorithms, thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses.

Several authors delve into progress in key subfields of AI.21 In their essay “Searching for Computer Vision North Stars,” Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets, such as ImageNet, that could be used for benchmarking performance. In their respective essays “Human Language Understanding & Reasoning” and “The Curious Case of Commonsense Intelligence,” Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models that comprise hundreds of billions of parameters and use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been specifically trained and generate human-like outputs, not only in natural language, but also in images, software code, and more, as Mira Murati discusses and illustrates in “Language & Coding Creativity.” Some have started to refer to these large language models as foundational models, in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations, which are highlighted in this volume and elsewhere, including by some of their developers.24

In “The Machines from Our Future,” Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations of the “industrial” approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems.
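Returning to the language models discussed above: the phrase “self-supervised learning on vast amounts of data” can be summarized, at the risk of oversimplification, by the standard next-token (autoregressive) training objective. This is a generic textbook formulation, not the specification of any particular model mentioned in this volume. For a token sequence x_1, ..., x_T and model parameters \theta, training minimizes

\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}\left(x_t \mid x_1, \ldots, x_{t-1}\right)

Because the training targets are simply the next tokens of the text itself, no human labeling is required, which is what makes training on such large corpora feasible; prompting a pretrained model then amounts to conditioning this learned distribution on a user-supplied prefix and sampling a continuation.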
In robotics, as in AI more generally, there has always been a tension as to whether to copy, or simply draw inspiration from, how humans and other biological organisms achieve intelligent behavior. Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although so far more in one direction than the other, as Alexis Baria and Keith Cross have suggested.25

Despite the success of the current approaches to AI, there are still many shortcomings and limitations, as well as conceptually hard problems, in AI.26 It is useful to distinguish, on the one hand, problematic shortcomings, such as when AI does not perform as intended or safely, produces biased or toxic outputs that can lead to harm, impinges on privacy, generates false information about the world, or has characteristics such as a lack of explainability, all of which can lead to a loss of public trust. These shortcomings have rightly captured the attention of the wider public and regulatory bodies, as well as researchers, among whom there is an increased focus on technical AI and ethics issues.27 In recent years, there has been a flurry of efforts to develop principles and approaches to responsible AI, as well as bodies involving industry and academia, such as the Partnership on AI, that aim to share best practices.28 Another important shortcoming has been the significant lack of diversity - especially with respect to gender and race - in the people researching and developing AI in both industry and academia, as has been well documented in recent years.29 This is an important gap in its own right, but also with respect to the characteristics of the resulting AI and, consequently, its intersections with society more broadly.

On the other hand, there are limitations and hard problems associated with the things that AI is not yet capable of, which, if solved, could lead to more powerful, more capable, or more general AI. In their Turing Lecture, deep learning pioneers Yoshua Bengio, Yann LeCun, and Geoffrey Hinton took stock of where deep learning stands and highlighted its current limitations, such as the difficulties with out-of-distribution generalization.30 In the case of natural language processing, Manning and Choi highlight the hard challenges in reasoning and commonsense understanding, despite the surprising performance of large language models. Elsewhere, computational linguists Emily Bender and Alexander Koller have challenged the notion that large language models do anything resembling understanding, learning, or meaning.31 In “Multi-Agent Systems: Technical & Ethical Challenges of Functioning in a Mixed Group,” Kobi Gal and Barbara Grosz discuss the hard problems in multi-agent systems, highlighting the conceptual difficulties - such as how to reason about other agents, their belief systems, and intentionality - as well as the ethical challenges in both cooperative and competitive settings, especially when the agents include both humans and machines.
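One of the limitations mentioned above, out-of-distribution generalization, can be illustrated with a deliberately simple, hypothetical example: the Python sketch below fits a linear model to synthetic data drawn from a narrow input range and then evaluates it on inputs well outside that range, where its error grows sharply. It illustrates the general phenomenon only and is not drawn from any of the work cited in this volume.

```python
import numpy as np

# Toy illustration of out-of-distribution (OOD) generalization failure:
# fit a linear model where the true relationship is quadratic, then test
# it outside the range of inputs seen during training. Data are synthetic.
rng = np.random.default_rng(1)

def true_fn(x):
    return x ** 2

# Training data: inputs drawn from [0, 1].
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = true_fn(x_train) + rng.normal(scale=0.05, size=x_train.shape)

# Fit a degree-1 (linear) model.
coeffs = np.polyfit(x_train, y_train, deg=1)

def predict(x):
    return np.polyval(coeffs, x)

# In-distribution test: inputs from the same range as training.
x_in = rng.uniform(0.0, 1.0, size=200)
# Out-of-distribution test: inputs shifted well outside the training range.
x_out = rng.uniform(3.0, 4.0, size=200)

mse_in = np.mean((predict(x_in) - true_fn(x_in)) ** 2)
mse_out = np.mean((predict(x_out) - true_fn(x_out)) ** 2)
print(f"in-distribution error:     {mse_in:.3f}")
print(f"out-of-distribution error: {mse_out:.3f}")
```

Deep networks fail in analogous, though subtler, ways when the data they encounter in deployment differ systematically from the data they were trained on.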
Elsewhere, Allan Dafoe and others provide a useful overview of the open problems in cooperative AI.32 Indeed, there is a growing sense among many that we do not have adequate theories for the sociotechnical embedding of AI systems, especially as they become more capable and the scope of their societal use expands.

And although AI and its related techniques are proving to be powerful tools for research in science, as examples in this volume and elsewhere illustrate - including recent examples in which embedded AI capabilities not only help evaluate results but also steer experiments, going beyond heuristics-based experimental design to become what some have termed “self-driving laboratories”33 - getting AI to understand science and mathematics, and to theorize and develop novel concepts, remain grand challenges for AI.34 Indeed, the possibility that more powerful AI could lead to new discoveries in science, as well as enable game-changing progress on some of humanity's greatest challenges and opportunities, has long been a key motivation for many at the frontier of AI research to build more capable systems.

Beyond the particulars of each subfield of AI, the list of more general hard problems that continue to limit the possibility of more capable AI includes one-shot learning, cross-domain generalization, causal reasoning, grounding, complexities of timescales and memory, and meta-cognition.35 Consideration of these and other hard problems that could lead to more capable systems raises the question of whether the current approaches - mostly characterized by deep learning, the building of ever larger foundational and multimodal models, and reinforcement learning - are sufficient, or whether entirely different conceptual approaches are needed in addition, such as neuroscience-inspired cognitive agent approaches, semantic representations, or reasoning based on logic and probability theory, to name a few. On whether and what kind of additional approaches might be needed, the AI community is divided, but many believe that the current approaches,36 along with further evolution of compute and learning architectures, have yet to reach their limits.37

The debate about the sufficiency of the current approaches is closely associated with the question of whether artificial general intelligence can be achieved, and if so, how and when. Artificial general intelligence (AGI) is defined in distinction to what is sometimes called narrow AI: that is, AI developed and fine-tuned for specific tasks and goals, such as playing chess. The development of AGI, on the other hand, aims for more powerful AI - at least as powerful as humans - that is generally applicable to any problem or situation and, in some conceptions, includes the capacity to evolve and improve itself, as well as to set and evolve its own goals and preferences. Though the question of whether, how, and when AGI will be achieved is a matter for debate, most agree that its achievement would have profound implications - beneficial and worrisome - for humanity, as is often depicted in popular books38 and in films from 2001: A Space Odyssey through Terminator and The Matrix to Ex Machina and Her.
Whether it is imminent or not, there is growing agreement among many at the frontier of AI research that we should prepare for the possibility of powerful AGI - with respect to safety and control, alignment and compatibility with humans, its governance and use, and the possibility that multiple varieties of AGI could emerge - and that we should factor these considerations into how we approach the development of AGI.

Most of the investment, research and development, and commercial activity in AI today is of the narrow AI variety, in its numerous forms: what Nigel Shadbolt terms the speciation of AI. This is hardly surprising given the scope for useful and commercial applications and the potential for economic gains in multiple sectors of the economy.39 However, a few organizations have made the development of AGI their primary goal. Among the most well-known of these are DeepMind and OpenAI, each of which has demonstrated results of increasing generality, though still a long way from AGI.

Perhaps the most widely discussed societal impact of AI and automation is on jobs and the future of work. This is not new. In 1964, in the wake of the era's excitement about AI and automation, and concerns about their impact on jobs, President Lyndon Johnson empaneled a National Commission on Technology, Automation, and Economic Progress.40 Among the commission's conclusions were that such technologies were important for economic growth and prosperity and “the basic fact that technology destroys jobs, but not work.” Most recent studies of this effect, including those I have been involved in, have reached similar conclusions: that over time, more jobs are gained than are lost. These studies highlight that it is the sectoral and occupational transitions and the skill and wage effects - not the existence of jobs broadly - that will present the greatest challenges.41

In their essay “Automation, AI & Work,” Laura Tyson and John Zysman discuss these implications for work and workers. Michael Spence goes further, in “Automation, Augmentation, Value Creation & the Distribution of Income & Wealth,” to discuss the distributional issues with respect to income and wealth within and between countries, as well as the societal opportunities that are created, especially in developing countries. In “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Erik Brynjolfsson discusses how the use of human benchmarks in the development of AI runs the risk of producing AI that substitutes for, rather than complements, human labor. He concludes that the direction AI's development takes in this regard, and the resulting outcomes for work, will depend on the incentives for researchers, companies, and governments.42

Still, a concern remains that the conclusion that more jobs will be created than lost draws too much from patterns of the past and does not look far enough into the future and at what AI will be capable of.
The arguments for why AI could break from past patterns of technology-driven change include: first, that unlike in the past, technological change is happening faster while the ability of labor markets (including workers) and societal systems to adapt remains slow, creating a mismatch; and second, that, until now, automation has mostly mechanized physical and routine tasks, but going forward AI will take on more cognitive and nonroutine tasks, creative tasks, tasks based on tacit knowledge, and, if early examples are any indication, even socioempathic tasks are not out of the question.43 In other words, “There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” This was Herbert Simon and Allen Newell in 1957.44

Acknowledging that this time could be different usually elicits two responses. First, that new labor markets will emerge in which people will value things done by other humans for their own sake, even when machines may be capable of doing those things as well as or even better than humans. The other response is that AI will create so much wealth and material abundance, all without the need for human labor, that the scale of abundance will be sufficient to provide for everyone's needs. And when that happens, humanity will face the challenge that Keynes once framed: “For the first time since his creation man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”45 However, most researchers believe that we are not close to a future in which the majority of humanity will face Keynes's challenge, and that until then, there are other AI- and automation-related effects that must be addressed in labor markets now and in the near future, such as inequality and other wage effects, education, skilling, and how humans work alongside increasingly capable machines - issues that Laura Tyson and John Zysman, Michael Spence, and Erik Brynjolfsson discuss in this volume.

Jobs are not the only aspect of the economy impacted by AI. Russell provides a directional estimate of the potentially huge economic bounty from artificial general intelligence, once fully realized: a global GDP of $750 trillion, or ten times today's global GDP. But even before we get to fully realized general-purpose AI, the commercial opportunities for companies, and for countries the potential productivity gains, economic growth, and economic competitiveness, from narrow AI and its related technologies are more than sufficient to ensure intense pursuit of and competition in the development, deployment, and use of AI by companies and countries alike.
At the national level, while many believe the United States is ahead, it is generally acknowledged that China is fast becoming a major player in AI, as evidenced by its growth in AI research, infrastructure, and ecosystems, as highlighted in several reports.46 Such competition will likely have market structure effects for companies and countries, given the characteristics of such technologies as discussed by Eric Schmidt, Spence, and others elsewhere.47 Moreover, the competitive dynamics may get in the way of responsible approaches to AI and of issues requiring collective action (such as safety) between competitors, whether they are companies or countries, as Amanda Askell, Miles Brundage, and Gillian Hadfield have highlighted.48

Nations have reasons beyond the economic to want to lead in AI. The role of AI in national security - in surveillance, signals intelligence, cyber operations, defense systems, battle-space superiority, autonomous weapons, even disinformation and other forms of sociopolitical warfare - is increasingly clear. In “AI, Great Power Competition
