Counter-Closure Principles in the Age of Complex Software Systems: A Generalized Challenge from AI

Abstract

The rapid advancement of artificial intelligence has brought a host of new epistemological challenges. One particularly pressing question is whether, and to what extent, AI systems can serve as sources of epistemic goods. Can they effectively transmit knowledge or understanding? And if they do not possess these epistemic goods themselves, can they still generate them for human users? This article explores these questions by critically examining the constraints posed by counter-closure principles – epistemological principles that allegedly cast doubt on the epistemic potential of AI. By addressing these principles, I aim to lay the groundwork for a systematic inquiry into the social epistemology of AI.

Similar Papers
  • Research Article
  • Cited by 23
  • 10.2139/ssrn.2931828
When Artificial Intelligence Systems Produce Inventions: The 3A Era and an Alternative Model for Patent Law
  • Mar 13, 2017
  • SSRN Electronic Journal
  • Shlomit Yanisky-Ravid + 1 more

Currently, robots, Artificial Intelligence and machine learning systems (hereinafter referred to collectively as “AI” or “AI systems”) can create inventions which, had they been created by humans, would be eligible for patent protection. This study addresses the patentability of these inventions created by AI systems. We argue that traditional patent law has become outdated, inapplicable and irrelevant with respect to inventions created by AI systems. We call on policy makers to rethink current patent law governing AI systems and replace it with tools more applicable to the new (3A) era of advanced automated and autonomous AI systems. Our argument is based on three pillars: the features of AI systems, the Multiplayer Model and the irrelevance of theoretical justifications concerning intellectual property. In order to fully convey the ability of AI systems to create inventions, the article explains, for one of the first times in the legal literature, what AI systems are, how they work and what makes them (so) intelligent. This understanding is crucial to any further discourse about AI systems. We identify eight crucial features of AI systems: they are (1) creative; (2) unpredictable; (3) independent and autonomous; (4) rational; (5) evolving; (6) capable of data collection and communication; (7) efficient and accurate; and they (8) freely choose among alternative options. We argue that, due to these features, AI systems are capable of independently developing inventions which, had they been created by humans, would be patentable (and able to be registered as patents). The traditional approach to patent law, in which policy makers seek to identify the human inventor behind the patent, is therefore no longer relevant. We are facing a new era of machines “acting” independently, with no human being behind the inventive act itself. The second pillar of our argument is the Multiplayer Model, which characterizes the long process through which inventions are created by AI systems. The Multiplayer Model, which is likewise almost absent from the current legal literature, describes the multiple participants and stakeholders, both overlapping and independent, involved in the process, including software programmers, data and feedback suppliers, trainers, system owners and operators, employers, the public and the government. The model conveys that the efforts of traditional patent law to identify a single inventor of these products and processes are no longer applicable. The third pillar of our argument is the irrelevance of theoretical justifications, such as personality and inventiveness/efficiency, to inventions created by AI systems. In contrast to other scholars, we argue that traditional patent law is irrelevant and inapplicable to these situations, that these inventions should not be patentable at all and that other tools can achieve the same ends while promoting innovation and public disclosure. These other, non-patent incentives include commercial tools such as electronic and cyber controls over inventions, first-mover market advantages and license agreements. This proposal serves a gatekeeping function and is superior to a revision of the non-obviousness standard used by other scholars to afford patent protection to inventions by AI systems. In maintaining the traditional patent system by hunting for a “real” human inventor, policy makers exhibit a misunderstanding of advanced technology and of the features of AI systems.
We conclude with a discussion of the implications of our analysis for different legal regimes, such as tort, contracts and even criminal law.

  • Research Article
  • Cited by 279
  • 10.2139/ssrn.3064761
Accountability of AI Under the Law: The Role of Explanation
  • Jan 1, 2017
  • SSRN Electronic Journal
  • Finale Doshi-Velez + 9 more

The ubiquity of systems using artificial intelligence, or AI, has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before: applications range from clinical decision support to autonomous driving and predictive policing. That said, common sense reasoning [McCarthy, 1960] remains one of the holy grails of AI, and there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems. Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there are also important consistencies: when demanding explanation from humans, what we typically want to know is how and whether certain input factors affected the final decision or outcome. These consistencies allow us to list the technical considerations that must be addressed if we want AI systems that can provide the kinds of explanations currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous: there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.
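To make this concrete, here is a minimal sketch, in Python, of the kind of explanation the authors say the law most consistently demands: a report of how and whether individual input factors affected a final decision. The decision rule, factor names, and numbers are invented for illustration, not taken from the paper.

```python
# Hypothetical decision rule standing in for an AI system. Perturbing one
# input factor at a time reports how and whether each factor affected the
# decision -- the consistency the authors identify in legal demands for
# explanation.

def decide(income, debt, years_employed):
    score = 0.5 * income - 0.8 * debt + 0.3 * years_employed
    return score, score > 2.0

applicant = {"income": 5.0, "debt": 3.0, "years_employed": 2.0}
base_score, base_outcome = decide(**applicant)

for factor in applicant:
    nudged = dict(applicant)
    nudged[factor] += 1.0                      # vary one factor, hold the rest
    score, outcome = decide(**nudged)
    print(f"{factor}: +1 unit -> score change {score - base_score:+.2f}, "
          f"decision flips: {outcome != base_outcome}")
```

Note that such a probe reports influence on the outcome, not the system's internal mechanism; that distinction recurs in several of the papers below.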

  • Research Article
  • Cited by 16
  • 10.1162/daed_e_01897
Getting AI Right: Introductory Notes on AI & Society
  • May 1, 2022
  • Daedalus
  • James Manyika

This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination, not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2 Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3 Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the recent emergence of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding. While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention. Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research, from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5 While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators (now covering more than one hundred languages), facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6 As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7 Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations (mathematical, technical, and philosophical), with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI (the connectionist learning approach and the notion of neural nets) and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much." Putnam questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?" He concluded: "…" This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence. This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10 The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society.
I am grateful to the authors for accepting my invitation to write these essays. Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI. This has not been helped by the current trend (among researchers in science and the humanities, startups, established companies, and even governments) to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI. Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually human, can do. For example, artificial intelligence is: "…" The human abilities invoked in such definitions include visual perception, speech recognition, the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later). The second type of definitions try to be free of human-centricity and define an intelligent agent or system, whatever its origin, makeup, or method, as: "…" This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own. These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects. Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us? Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it, and the use of the term, is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17 In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also for its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieving artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19 A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was espoused most prominently by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers. The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems. Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition (exemplified by Descartes, Boole, Frege, Russell, and Church, among others) that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others.
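Because the connectionist vision recurs throughout this history, a minimal sketch of Rosenblatt's perceptron learning rule may help fix ideas. The toy data and training loop below are illustrative assumptions, not material from the essay.

```python
# Minimal sketch of the perceptron learning rule at the heart of the early
# connectionist approach: nudge the weights toward each misclassified example.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # linearly separable toy labels

w = np.zeros(2)
b = 0.0
for _ in range(20):                             # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        err = yi - pred                         # 0 if correct, +/-1 if wrong
        w += err * xi                           # move the boundary toward xi
        b += err

acc = np.mean([int(w @ xi + b > 0) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```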
The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach. Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come. The success of the machine learning approach has benefited from the boom in the availability of data to train the algorithms, thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses. Several authors delve into progress in key subfields of AI.21 In their essay "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets, such as ImageNet, that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models that comprise hundreds of billions of parameters and use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity." Some have started to refer to these large language models as foundational models, in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24 In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations in the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems.
In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior. Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although so far more in one direction than the other. …

  • Book Chapter
  • 10.70593/978-81-988918-1-5_5
Building secure artificial intelligence systems: Defending against vulnerabilities in intelligent technologies
  • Jun 6, 2025
  • Abhishek Dodda

Given the increasing capability and applicability of AI systems in sensitive domains within society, we, cyber and information security specialists with a long-standing interest in critical computer systems, must extend our mission to include those systems dedicated to Artificial Intelligence. We must ensure, to the degree feasible, that AI systems function dependably and securely when deployed. After years of pushing back decades of optimism that had located AI systems beyond our field of study, a realistic attitude toward the considerable benefits and, equally, the considerable dangers that AI systems can engender has emerged. While the goal of designing such systems so that they reflect or generate intelligent behavior in a quantifiable way has regained attention, our focus here is on their security. AI systems are vulnerable to a set of attacks that differ on key dimensions from the traditional attacks against conventional computer systems. We refer to this set of attacks as the “AI Security Vulnerability Landscape.” Some of the vulnerabilities of non-AI systems are also present in AI systems, but heightened or modified. In this chapter, we summarize the kinds of vulnerabilities that we feel are most salient. We also consider some new ideas, surprisingly longstanding in some contexts, such as verification of generated behavior. Our particular focus is defensive activities (Huang et al., 2011; Goodfellow et al., 2014; Biggio & Roli, 2018). To keep our focus limited, we restrict our attention predominantly to Machine Learning, the most visible AI activity. Most of the vulnerabilities that we would summarize for AI systems more generally are also the most relevant for Learning Systems. However, the types of intelligent systems that present other forms of weakness are somewhat broader than the kind of supervised or unsupervised learning through repetition, with a focus on generating probability distributions over symbol strings, that presently dominates in practice. For example, the increasingly popular area of Ontology-based Systems for Knowledge Representation and Generation raises different issues than those affecting Learning Systems. Other logical activities, such as planning via deriving deductions, not already covered, also require distinct emphasis (Moosavi-Dezfooli et al., 2016; Papernot et al., 2016).
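As one concrete instance of the attack landscape the chapter surveys, here is a hedged sketch of the fast gradient sign method from the cited Goodfellow et al. (2014) line of work, applied to a hand-rolled logistic-regression "victim" whose input gradient is available in closed form. The weights, input, label, and step size are all invented for illustration.

```python
# Sketch of a fast-gradient-sign evasion attack (Goodfellow et al., 2014)
# against a toy logistic-regression model: move the input a small step in
# the direction that most increases the model's loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])      # assumed trained victim-model weights
b = 0.1
x = np.array([0.4, -0.3, 0.8])      # a correctly classified input with label 1
y = 1

p = sigmoid(w @ x + b)
grad_x = (p - y) * w                # d(cross-entropy)/dx, analytic here

eps = 0.5
x_adv = x + eps * np.sign(grad_x)   # FGSM step: bounded worst-case nudge

print(f"clean confidence       P(y=1) = {sigmoid(w @ x + b):.3f}")
print(f"adversarial confidence P(y=1) = {sigmoid(w @ x_adv + b):.3f}")
```

On these toy numbers the model's confidence drops from roughly 0.85 to roughly 0.43, flipping the decision, which is the shape of vulnerability the chapter's defensive discussion targets.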

  • Book Chapter
  • Cited by 15
  • 10.1007/978-3-030-80847-1_1
Ethics and Regulation of Artificial Intelligence
  • Jan 1, 2021
  • Anthony Wong

Over the last few years, the world has deliberated and developed numerous ethical principles and frameworks. It is the general opinion that the time has arrived to move from principles to operationalizing the ethical practice of AI. It is now recognized that principles and standards can play a universal harmonizing role for the development of AI-related legal norms across the globe. However, how do we translate and embrace these articulated values, principles and actions to guide Nation States around the world in formulating their regulatory systems, policies or other legal instruments regarding AI? Our regulatory systems have attempted to keep abreast of new technologies by recalibrating and adapting our regulatory frameworks to provide for new opportunities and risks, to confer rights and duties, to establish safety and liability frameworks, and to ensure legal certainty for businesses. These past adaptations have been reactive and sometimes piecemeal, often with artificial delineation of rights and responsibilities and with unintended flow-on consequences. Previously, technologies have been deployed more like tools, but as autonomy and self-learning capabilities increase, robots and intelligent AI systems will feel less and less like machines and tools. There is now a significant difference, because machine learning AI systems have the ability ‘to learn’, adapt their performances and ‘make decisions’ from data and ‘life experiences’. This paper, presented at the International Joint Conference on Artificial Intelligence - Pacific Rim International Conference on Artificial Intelligence in 2021, provides brief insights on some selected topical developments in ethical principles and frameworks, our regulatory systems and the current debates on some of the risks and challenges from the use and actions of AI, autonomous and intelligent systems [1]. Keywords: AI, robots, automation, regulation, ethics, law, liability, transparency, explainability, data protection, privacy, legal personhood, job transition, employment.

  • Research Article
  • Cited by 1
  • 10.5937/pnb25-46935
Primena veštačke inteligencije u savremenom ratovanju (The Application of Artificial Intelligence in Modern Warfare)
  • Jan 1, 2023
  • Politika nacionalne bezbednosti
  • Milan Miljković + 1 more

Artificial intelligence is making rapid strides, and trying to predict its limits is an uncertain endeavor. While it opens up significant opportunities, it also presents challenges. AI has the potential to greatly enhance military capabilities, acting as a force multiplier. Military applications of AI can confer a competitive edge by expediting decision-making, revolutionizing the decision-making process, and improving command, control, and oversight capabilities. Similar to any groundbreaking technology, AI is poised to spark competition among powerful nations, potentially giving rise to security dilemmas, disrupting conflict predictability, and increasing the risk of escalation. At its core, the pivotal question centers on the interaction between human operators and AI systems. In the realm of strategy, official state documents underscore the strategic significance of AI development and deployment in military endeavors. AI systems are likely to bolster military strategy, especially in forecasting and planning. Nevertheless, the human element in shaping strategy remains paramount, as it relies on instincts, creativity, and values. Nonetheless, there remains a concern that military personnel might excessively rely on AI for decision-making. In terms of military doctrine, the role of AI will likely be limited to assessment and aiding in doctrine revision. Considering that doctrine outlines a state's armed forces' purpose, values, and organizational culture, it is apparent that doctrine will play a pivotal role in defining how a state's military perceives and interacts with AI systems. Artificial intelligence will play a substantial role in military planning, primarily due to its capacity to rapidly and accurately process complex and vast datasets. Even if AI systems are not granted decision-making authority, military planners and commanders may heavily depend on AI analyses and recommendations due to time constraints and the intricacies of wartime scenarios. Consequently, the line between AI that supports decision-making and AI that makes decisions itself could become less distinct. Concerning Rules of Engagement, they serve as a suitable framework for distinguishing the utilization of AI in specific conflicts and missions. In the realm of military orders, AI systems are expected to offer significant support in command and control functions, though they may not be entrusted with issuing orders independently. Nevertheless, practical challenges may arise in distinguishing between orders issued by algorithms and those given by commanders, potentially resulting in de facto AI-driven decision-making, akin to the planning stage. Military structures, standards, and processes are likely to adapt in tandem with technological advancements. It is, therefore, imperative to proactively establish fundamental principles, values, and standards governing AI use, rather than reacting to technological developments, to avert unforeseen or undesirable consequences. Future discussions and research on AI's role in military operations, as well as its integration into strategy, doctrine, operational plans, Rules of Engagement, and orders, should concentrate on the interaction between humans and machines, as this remains the crux of the matter. Striking an appropriate balance between AI's role in military preparation and execution and the effective management of military artificial intelligence is of paramount importance.

  • Conference Article
  • Cited by 3
  • 10.54941/ahfe1004068
Assessing the Transparency and Explainability of AI Algorithms in Planning and Scheduling tools: A Review of the Literature
  • Jan 1, 2023
  • Sofia Morandini + 9 more

As AI technologies enter our working lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans at work. One critical requirement for such synergistic human-AI interaction is that the AI systems' behavior be explainable to the humans in the loop. The performance of decision-making by artificial intelligence has exceeded the capability of human beings in many specific domains. In the AI decision-making process, the inherent black-box algorithms and opaque system information lead to highly correct but incomprehensible results. The need for explainability of intelligent decision-making is becoming urgent, and a transparent process can strengthen trust between humans and machines. The TUPLES project, a three-year Horizon Europe R&I project, aims to bridge this gap by developing AI-based planning and scheduling (P&S) tools using a comprehensive, human-centered approach. TUPLES leverages data-driven and knowledge-based symbolic AI methods to provide scalable, transparent, robust, and secure algorithmic planning and scheduling solutions. It adopts a use-case-oriented methodology to ensure practical applicability. Use cases are chosen based on input from industry experts, cutting-edge advances, and manageable risks (e.g., manufacturing, aviation, waste management). The EU guidelines for Trustworthy Artificial Intelligence highlight key requirements such as human agency and oversight, transparency, fairness, societal well-being, and accountability. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a practical self-assessment tool for businesses and organizations to evaluate their AI systems. Existing AI-based P&S tools only partially meet these criteria, so innovative AI development approaches are necessary. We conducted a literature review to explore current research on the transparency and explainability of AI algorithms in P&S, aiming to identify metrics and recommendations. The findings highlighted the importance of Explainable AI (XAI) in AI design and implementation. XAI addresses the black box problem by making AI systems explainable, meaningful, and accurate. It uses pre-modeling, in-modeling, and post-modeling explainability techniques, relying on psychological concepts of human explanation and interpretation for a human-centered approach. The review pinpoints specific XAI methods and offers evidence to guide the selection of XAI tools in planning and scheduling.
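To make one of those categories concrete, here is a sketch of a post-modeling technique: fitting a local linear surrogate around a single decision of a black-box scheduler so that its behavior near one job can be read off as weights. The scoring function and feature names are invented for illustration; this is not the TUPLES tooling itself.

```python
# Post-hoc local surrogate: sample around one input, fit ordinary least
# squares, and read the coefficients as a local explanation of the black box.
import numpy as np

def black_box_priority(jobs):
    """Opaque stand-in for a learned scheduling model."""
    dur, deadline, penalty = jobs[:, 0], jobs[:, 1], jobs[:, 2]
    return np.tanh(2.0 * penalty / deadline) - 0.1 * dur

rng = np.random.default_rng(7)
job = np.array([4.0, 10.0, 5.0])                 # duration, deadline, penalty
neighborhood = job + rng.normal(scale=0.5, size=(500, 3))
scores = black_box_priority(neighborhood)

A = np.column_stack([neighborhood, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
for name, c in zip(["duration", "deadline", "penalty"], coef[:3]):
    print(f"local weight for {name}: {c:+.3f}")
```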

  • Research Article
  • Cited by 1
  • 10.1609/aies.v7i1.31713
What to Trust When We Trust Artificial Intelligence (Extended Abstract)
  • Oct 16, 2024
  • Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
  • Duncan Purves + 2 more

So-called “trustworthy AI” has emerged as a guiding aim of industry leaders, computer and data science researchers, and policy makers in the US and Europe. Often, trustworthy AI is characterized in terms of a list of criteria. These lists usually include at least fairness, accountability, and transparency. Fairness, accountability, and transparency are valuable objectives, and they have begun to receive attention from philosophers and legal scholars. However, those who put forth criteria for trustworthy AI have failed to explain why satisfying the criteria makes an AI system (or the organizations that make use of the AI system) worthy of trust. Nor do they explain why the aim of trustworthy AI is important enough to justify devoting resources to achieving it. It even remains unclear whether an AI system is the sort of thing that can be trustworthy or not. To explain why fairness, accountability, and transparency are suitable criteria for trustworthy AI, one needs an analysis of trustworthy AI. Providing an analysis of trustworthy AI is a distinct task from providing criteria. Criteria are diagnostic; they provide a useful test for the phenomenon of interest, but they do not purport to explain the nature of the phenomenon. It is conceivable that an AI system could lack transparency, accountability, or fairness while remaining trustworthy. An analysis of trustworthy AI provides the fundamental features of an AI system in virtue of which it is (or is not) worthy of trust. An AI system that lacks these features will, necessarily, fail to be worthy of trust. This paper puts forward an analysis of trustworthy AI that can be used to critically evaluate criteria for trustworthy AI such as fairness, accountability, and transparency. In this paper we first make clear the target concept to be analyzed: trustworthy AI. We argue that AI, at least in its current form, should be understood as a distributed, complex system embedded in a larger institutional context. This characterization of AI is consistent with recent definitions proposed by national and international regulatory bodies, and it eliminates some unhappy ambiguity in the common usage of the term. We further limit the scope of our discussion to AI systems which are used to inform decision-making about qualification problems, problems wherein a decision-maker must decide whether an individual is qualified for some beneficial or harmful treatment. We argue that, given reasonable assumptions about the nature of trust and trustworthiness, only AI systems that are used to inform decision-making about qualification problems are appropriate candidates for attributions of (un)trustworthiness. We then distinguish between two models of trust and trustworthiness that we find in the existing literature. We motivate our account by highlighting this as a dilemma in the accounts of trustworthy AI that have previously been offered. These accounts claim that trustworthiness is either exclusive to full agents (and it is thus nonsense when we talk of trustworthy AI), or they offer an account of trustworthiness that collapses into mere reliability. The first sort of account we refer to as an agential account and the second sort we refer to as a reliability account. We offer that one of the core challenges of putting forth an account of trustworthy AI is to avoid reducing to one of these two camps.
It is thus a desideratum of our account that it avoids being exclusive to full moral agents, while it simultaneously avoids capturing things such as mere tools. We go on to propose our positive account, which we submit avoids these twin pitfalls. We subsequently argue that if AI can be trustworthy, then it will be trustworthy on an institutional model. Starting from an account of institutional trust offered by Purves and Davis, we argue that trustworthy AI systems have three features: they are competent with regard to the task they are assigned, they are responsive to the morally salient facts governing the decision-making context in which they are deployed, and they publicly provide evidence of these features. As noted, this account builds on a model of institutional trust offered by Purves and Davis and an account of default trust from Margaret Urban Walker. The resulting account allows us to accommodate the core challenge of finding a balance between agential accounts and reliability accounts. We go on to refine our account, answer objections, and revisit the list of criteria from above, now explained in terms of competence, responsiveness, and evidence.

  • Preprint Article
  • Cited by 1
  • 10.48550/arxiv.2305.19278
Enhancing Human Capabilities through Symbiotic Artificial Intelligence with Shared Sensory Experiences
  • May 26, 2023
  • arXiv (Cornell University)
  • Hongxing Rui + 2 more

The merging of human intelligence and artificial intelligence has long been a subject of interest in both science fiction and academia. In this paper, we introduce a novel concept in Human-AI interaction called Symbiotic Artificial Intelligence with Shared Sensory Experiences (SAISSE), which aims to establish a mutually beneficial relationship between AI systems and human users through shared sensory experiences. By integrating multiple sensory input channels and processing human experiences, SAISSE fosters a strong human-AI bond, enabling AI systems to learn from and adapt to individual users, providing personalized support, assistance, and enhancement. Furthermore, we discuss the incorporation of memory storage units for the long-term growth and development of both the AI system and its human user. As we address user privacy and ethical guidelines for responsible AI-human symbiosis, we also explore potential biases and inequalities in AI-human symbiosis and propose strategies to mitigate these challenges. Our research aims to provide a comprehensive understanding of the SAISSE concept and its potential to effectively support and enhance individual human users through symbiotic AI systems. This position article aims to open a discussion of potential AI-human interaction topics within the scientific community, rather than to provide experimental or theoretical results.

  • Research Article
  • 10.1353/tech.2000.0038
Artificial Knowing: Gender and the Thinking Machine (review)
  • Jan 1, 2000
  • Technology and Culture
  • Bayla Singer

Reviewed by Bayla Singer. Artificial Knowing: Gender and the Thinking Machine. By Alison Adam. London: Routledge, 1998. Pp. v+210: notes/references, bibliography, index. $75 (cloth); $22.99 (paper). Alison Adam has written a feminist polemic of limited use to historians of technology. In correctly identifying the extreme reductionism practiced by those developing “artificial intelligence” (AI) or “expert systems,” she falls into the corresponding error of overgeneralizing all the omitted aspects as “feminist.” Many readers will be uncomfortable with the style of her presentation. To quote her introduction: “I am conscious that there is, of necessity, an element of zig-zag. . . . [T]here is a fair amount of introductory material and that it is chapter three before the ‘meat course’ arrives. . . . [My book is] a Chinese banquet, made up of lots of little courses of different flavours. . . .” (p. 3). The first chapter names and describes varieties of feminist approach; the second contains an overview of AI’s history and some objections to its epistemological foundations. In the third, “meat,” chapter, Adam begins to examine two AI systems, Cyc and Soar. Her analysis continues through chapters 4 and 5, with chapter 6 presenting some suggestions for “Feminist AI Projects and Cyberfutures.” Throughout, the primary focus is feminist theory. “[I]t is the job of feminist epistemology to offer a broadside attack on traditional forms of epistemology, and to expose the ways in which women are denied the status of knowers, and what they know is denied the status of knowledge” (p. 28). “In a world where ‘expert’ almost always means white, middle-class, male experts, it is difficult to see how expert systems could contribute to the pluralistic discourse argued for by much of feminist theory” (p. 42). More than that, the experts emulated by the Cyc and Soar projects are mathematically inclined, academically gifted males. Adam convincingly demonstrates that the tasks set for Cyc and Soar are limited and highly artificial: solving mathematical puzzles rather than, say, reading a newspaper with comprehension (p. 126). She further explores the “disembodied” character of “rationality” as defined and used by epistemologists and AI researchers. “[N]either Cyc nor Soar have satisfactory ways of dealing with the propositional/skills distinction. . . . This results in a very narrow conception of what it means to act intelligently” (pp. 127, 128). Although Adam gives some lip service to the fact that tacit knowledge (“know how” rather than “know what”) is involved in many aspects of technological and other forms of expertise not usually gendered feminine (e.g., pp. 12, 111), she nevertheless treats all those forms of knowledge excluded from “traditional . . . epistemology” as the proper subject of feminism rather than considering them as part of a broader critique of reductionism. In her concluding chapter, Adam begins by urging that “feminism is a political project and the best research is where action proceeds from description” (p. 156). Adam sees her work as “showing the ways in which AI can be informed by feminist theory and can be used for feminist projects.” Quoting Sue C. Jansen, Adam expects “feminist semiological guerrilla warfare . . .
to transform the metaphors and models of science.” Her examples are hardly as robust as that: the first, “AI and Feminist Legal Theory,” suggests only that the expert system be “nonthreatening” to women “who have so little sense of themselves as persons with rights” that they have “difficulty in recognizing when their rights have been violated” (p. 160). In the second, “Feminist Computational Linguistics,” Adam refers to studies that have found gender differences in conversational behavior and ends by acknowledging that “the model described here is a white, middle-class, Anglo-American English one, which probably does not even fit, for example, New York Jewish speech” (p. 163). It is nowhere clear just what sort of feminist “action” Adam is proposing in this case, nor how AI expert systems might be useful to it. Adam neither promises nor delivers any historical perspective on the likely impact of their reductionist foundations on the eventual success of AI or expert systems, nor on the social influences likely to shape the product before and after it reaches the market. Her...

  • Research Article
  • Cited by 1
  • 10.1007/s13347-024-00820-1
Can We Trust Artificial Intelligence?
  • Jan 24, 2025
  • Philosophy & Technology
  • Christian Budnik

In view of the dramatic advancements in the development of artificial intelligence technology in recent years, it has become a commonplace to demand that AI systems be trustworthy. This view presupposes that it is possible to trust AI technology in the first place. The aim of this paper is to challenge this view. In order to do that, it is argued that the philosophy of trust really revolves around the problem of how to square the epistemic and the normative dimensions of trust. Given this double nature of trust it is possible to extract a threefold challenge to the defenders of the possibility of AI trust without presupposing any particular trust theory. They have to show (1) how trust in AI systems is more than mere reliance; (2) how AI systems can become objects of normative expectations; and (3) how the resulting attitude gives human agents reassurance in their interactions with AI systems. In order to demonstrate how difficult this task is, the threefold challenge is then applied to two recent accounts that defend the possibility of trust in AI systems. By way of conclusion it is suggested that instead of trusting AI systems, we should strive to make them reliable.

  • Book Chapter
  • Cited by 41
  • 10.1007/978-3-030-31284-8_10
Explainable Artificial Intelligence for Human-Centric Data Analysis in Virtual Learning Environments
  • Jan 1, 2019
  • José M Alonso + 1 more

The amount of data to analyze in virtual learning environments (VLEs) grows exponentially every day. The daily interaction of students with VLE platforms represents a digital footprint of the students’ engagement with the learning materials and activities. This large and valuable source of information needs to be managed and processed to be useful. Educational Data Mining and Learning Analytics are two research branches that have recently emerged to analyze educational data. Artificial Intelligence techniques are commonly used to extract hidden knowledge from data and to construct models that could be used, for example, to predict students’ outcomes. However, in the educational field, where the interaction between humans and AI systems is a main concern, there is a need to develop new Explainable AI (XAI) systems that are able to communicate, in a human-understandable way, the results of data analysis. In this paper, we use an XAI tool, called ExpliClas, with the aim of facilitating data analysis in the context of the decision-making processes to be carried out by all the stakeholders involved in the educational process. The Open University Learning Analytics Dataset (OULAD) has been used to predict students’ outcomes, and both graphical and textual explanations of the predictions have shown the need for and the effectiveness of using XAI in the educational field.
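The ExpliClas tool itself is not reproduced here, but the underlying idea of pairing a student-outcome prediction with a readable textual explanation can be sketched with an interpretable decision tree. The feature names and data below are synthetic stand-ins for a VLE log such as OULAD.

```python
# Train a shallow tree on synthetic engagement data and print it as textual
# rules, so each prediction comes with a human-readable justification.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["clicks_per_week", "assessments_submitted", "forum_posts"]
X = rng.poisson(lam=(30, 5, 3), size=(200, 3)).astype(float)
y = ((X[:, 0] > 25) & (X[:, 1] > 3)).astype(int)   # 1 = pass, 0 = fail (toy rule)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=features))    # the model as readable rules
print("prediction for a low-engagement student:",
      tree.predict([[10.0, 1.0, 0.0]])[0])
```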

  • Research Article
  • Cite Count Icon 6
  • 10.1007/s00146-020-01020-z
Moral control and ownership in AI systems
  • Jul 22, 2020
  • AI & SOCIETY
  • Raul Gonzalez Fabre + 2 more

AI systems are bringing an augmentation of human capabilities to shape the world. They may also bring a replacement of human conscience in large areas of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with ‘solutions’ pre-packaged or developed by the ‘intelligent’ machine itself. Artificial intelligence systems (AIS) are increasingly used in multiple applications and are receiving growing attention from public and private organisations. The purpose of this article is to offer a mapping of the technological architectures that support AIS, with a specific focus on moral agency. Through a literature review and reflection process, the following areas are covered: a brief introduction and review of the literature on moral agency; an analysis using the BDI logic model (Bratman 1987); an elementary review of artificial ‘reasoning’ architectures in AIS; the influence of data input and data quality; the positioning of AI systems in decision-support and decision-making scenarios; and, finally, some conclusions regarding the potential loss of moral control by humans due to AIS. The article contributes to the field of ethics and artificial intelligence by providing a discussion that helps developers and researchers understand how, and under what circumstances, the ‘human subject’ may totally or partially lose moral control and ownership over AI technologies. The topic is relevant because AIS are often not single machines but complex networks of machines that feed information and decisions into each other and to human operators. Detailed traceability of input, process, and output at each node of the network is essential for the system to remain within the field of moral agency. Moral agency, in turn, is at the basis of our system of legal responsibility, and social approval is unlikely to be obtained for entrusting important functions to complex systems in which no moral agency can be identified.
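To make the BDI lens concrete, here is a minimal, illustrative belief-desire-intention loop of the kind the article uses as an analytical frame. The class names and the human-override hook are assumptions for illustration, not the authors’ formalization; the point is to show where a human veto can (or cannot) sit in the architecture.

```python
# Minimal illustrative BDI (belief-desire-intention) agent loop.
# The human-approval hook marks the point where moral control either
# stays in human hands or passes to the machine.
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # what the system takes to be true
    desires: list = field(default_factory=list)     # goals it could pursue
    intentions: list = field(default_factory=list)  # goals it has committed to

    def perceive(self, observation: dict):
        self.beliefs.update(observation)

    def deliberate(self):
        # Commit only to desires the current beliefs mark as achievable.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(d + "_possible", False)]

    def act(self, human_approves) -> list:
        # The moral-control question: is a human kept in this loop,
        # or does the system execute its intentions unchecked?
        return [i for i in self.intentions if human_approves(i)]

agent = BDIAgent(desires=["notify_operator", "shut_down_line"])
agent.perceive({"notify_operator_possible": True, "shut_down_line_possible": True})
agent.deliberate()
# Here a human retains veto power over each intention before execution.
print(agent.act(lambda intent: intent != "shut_down_line"))
```

Removing the `human_approves` filter, or wiring it to an automatic approval, is precisely the architectural move by which, on the article’s analysis, moral control shifts away from the human operator.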

  • Research Article
  • Cite Count Icon 2
  • 10.26826/law-in-context.v37i3.177
Explainable Artificial Intelligence (XAI): A reason to believe?
  • Apr 26, 2022
  • Law in Context. A Socio-legal Journal
  • Greg Adamson

Artificial intelligence is an alluring technology from which companies and governments hope to benefit. In many circumstances, a condition of its use is that humans can understand an explanation of why an AI system acted as it did. This has encouraged the development of a field of “explainable artificial intelligence”, or XAI. Much of the work in this field has been encouraged by the US Defense Advanced Research Projects Agency (DARPA), through its XAI program initiated in 2016. This paper argues that an underacknowledged challenge of XAI is that, unlike most traditional technology, many AI systems contain inherent uncertainty. These systems are widely described as “black boxes” and can be described only through their behavior, a technique described in the literature as post-hoc, rather than through an understanding of their functioning. Explaining such systems is akin to explaining the workings of the natural world rather than the functioning of a known technology. While extensive work has been undertaken to explain the behavior of black-box AI systems, there are limits to the certainty that a post-hoc method can bring. Recognizing this is an important part of understanding the limitations of post-hoc reasoning in the use of advanced AI systems. Far simpler technologies have caused significant social damage: the UK Post Office Horizon system, and the Australian federal government’s Robodebt program. Turning to advanced AI systems, two recent prestigious reports on AI systems and law display an unreasoned enthusiasm for AI explainability. AI researchers should acknowledge that many advanced AI systems remain black boxes, that post-hoc explanations of these are inferences describing how the AI system may function, not how it does function, and that the application of these technologies should be managed accordingly. Otherwise, the search for explanations may simply become a reason to believe.
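A common post-hoc technique the paper’s argument applies to is surrogate modeling: fitting an interpretable model to mimic a black box and then reading the surrogate as the “explanation.” The sketch below, with illustrative models and synthetic data (not from the paper), measures how faithfully the surrogate tracks the black box; any fidelity gap is exactly the residual uncertainty the paper warns about.

```python
# Post-hoc explanation by global surrogate: train an interpretable tree
# to reproduce a black box's predictions, then measure fidelity.
# Models and data are illustrative, not from the paper under review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = ((X[:, 0] * X[:, 1] > 0) ^ (X[:, 2] > 0.5)).astype(int)  # nonlinear target

black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
bb_predictions = black_box.predict(X)

# Global surrogate: a shallow tree trained on the black box's outputs,
# not on the ground truth — it explains the model, not the world.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, bb_predictions)

fidelity = accuracy_score(bb_predictions, surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
# Fidelity below 100% means the readable explanation describes how the
# black box *may* function, not how it *does* function.
```

In this setup, the shallow tree can be read by a human, but everywhere it disagrees with the forest, the “explanation” is an inference rather than a description, which is the paper’s central caution.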

  • Research Article
  • 10.31305/rrijm2025.v05.n01.004
Balancing AI Innovation and Privacy: A Study of Facial Recognition Technologies under the DPDPA
  • Mar 31, 2025
  • Revista Review Index Journal of Multidisciplinary
  • Jayesh Rangari

The use of artificial intelligence facial recognition technologies (AI-FRT) poses qualitatively new challenges to privacy and data protection law, particularly under India’s Digital Personal Data Protection Act (DPDPA). The relationship between AI, surveillance technologies, and legal systems is analyzed, focusing on the ways AI-FRT systems conflict with, and comply with, the DPDPA’s requirements of data minimization, consent, algorithmic accountability, and the operationalization of other rights. The study examines the effects of unregulated biometric data harvesting and opaque decision-making processes in AI systems, drawing on Foucault’s (1975) panopticism, critiques of algorithmic bias, and Zuboff’s (2019) theory of surveillance capitalism. The methodology is a comparative legal analysis in which India’s approach to AI regulation is set alongside other international data protection measures, such as the European General Data Protection Regulation (GDPR), the US AI Bill of Rights, and China’s integrated AI governance policies. The results expose shortfalls in India’s legal provisions on AI, especially regarding algorithmic transparency, rectification of AI bias, and human intervention in automated decision-making. The study highlights that the DPDPA enables core data protection rights; however, it offers no clear parameters on AI governance, algorithmic fairness, or accountability for automated profiling. It puts forward suggestions for policy action to enhance AI control, including recommendations for clear AI laws, independent regulatory authorities, and mechanisms for users to lodge complaints against violations committed by artificial intelligence. Given the current pace of India’s expansion of AI-enabled surveillance systems, comprehensive regulation is indispensable to balance the promotion of innovation with the protection of human rights.
