Artificial Intelligence: A New Challenge for Human Understanding, Christian Education, and the Pastoral Activity of the Churches
Artificial intelligence (AI) is one of the most influential and rapidly developing phenomena of our time. New fields of study are being created at universities, and managers are constantly introducing new AI solutions for business management, marketing, and the advertising of new products. Unfortunately, AI is also used to promote dangerous political parties and ideologies. The research problem that is the focus of this work is expressed in the following question: How does the symbiotic relationship between artificial and natural intelligence manifest across three dimensions of human experience—philosophical understanding, educational practice, and pastoral care—and what hermeneutical, phenomenological, and critical realist insights can illuminate both the promises and perils of this emerging co-evolution? To address this question, an interdisciplinary research team was established, comprising a philosopher, an educator, and a pastoral theologian. This study is grounded in a critical–hermeneutic meta-analysis of the existing literature, ecclesial documents, and empirical investigations on AI. The results of scientific research allow for broader insight into the impact of AI on humans and on personal relationships in Christian communities. The authors are concerned not only with providing an in-depth understanding of the issue but also with taking into account the ecumenical perspective of religious, social, and cultural education of contemporary Christians. Our analysis reveals that cultivating a healthy symbiosis between artificial and natural intelligence requires specific competencies and ethical frameworks. We therefore conclude with practical recommendations for Christian formation that neither uncritically embrace nor fearfully reject AI, but rather foster wise discernment for navigating this unprecedented co-evolutionary moment in human history.
- Research Article
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2

Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3

Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the recent emergence of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding.
While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention. Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research, from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5

While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators - now covering more than one hundred languages - facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6

As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7

Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations - mathematical, technical, and philosophical - with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI - the connectionist learning approach and the notion of neural nets - and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much." Putnam questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?" He concluded: […]

This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence.
This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10

The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society. I am grateful to the authors for accepting my invitation to write these essays.

Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI.
This has not been helped by the current trend - among researchers in science and the humanities, startups, established companies, and even governments - to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI.

Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually human, can do. For example, artificial intelligence is: […] The human abilities invoked in such definitions include visual perception, speech recognition, the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later).
The second type of definitions try to be free of human-centricity and define an intelligent agent or system, whatever its origin, makeup, or method, as: […] This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own.

These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects.

Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us?

Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it and the use of the term is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17

In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieve artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19

A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was most espoused by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers.
The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems.

Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition - exemplified by Descartes, Boole, Frege, Russell, and Church, among others - that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others. The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach.

Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come.
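The perceptron unit at the heart of the early connectionist vision is simple enough to sketch in a few lines. The toy example below (an illustration of the classic Rosenblatt-style learning rule, not code from any system discussed in this volume) trains a single unit to compute logical AND, a linearly separable function on which the rule is guaranteed to converge:

```python
# Minimal single-unit perceptron learning logical AND.
# Illustrative sketch only: modern networks stack many such units and are
# trained with back-propagation rather than this single-unit rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - pred          # +1, 0, or -1
            w[0] += lr * err * x[0]      # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# AND is linearly separable, so the learning rule converges.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

The famous limitation of such single units (their inability to learn non-separable functions like XOR) was part of why the symbolic vision dominated for decades.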
The success of the machine learning approach has benefited from the boon in the availability of data to train the algorithms, thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses.

Several authors delve into progress in key subfields of AI.21 In their essay, "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets such as ImageNet that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models, comprising hundreds of billions of parameters, that use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity."
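The self-supervised idea behind these language models - the text itself supplies the training signal, with the next token serving as the label - can be illustrated with a deliberately tiny sketch. Here simple bigram counts stand in for the billions of learned transformer parameters of a real model; the function names and toy corpus are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration of the self-supervised next-token objective: no human
# labels are needed, because each character's successor in the text IS the
# label. Real LLMs learn a far richer conditional distribution over long
# contexts; bigram counts stand in for that here.

def train_bigram(text):
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1       # the following character is the "label"
    return counts

def predict_next(counts, ch):
    """Most likely next character after ch under the counted model."""
    return counts[ch].most_common(1)[0][0]

model = train_bigram("the theme of the thesis")
```

Scaling this idea up - longer contexts, learned representations instead of raw counts, and vastly more data - is, loosely speaking, what separates this sketch from the transformer-based models the essays discuss.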
Some have started to refer to these large language models as foundational models in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24

In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations in the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems. In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior.
Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, though so far more in one direction than the other. Despite the success of the current approaches to AI, there remain many shortcomings as well as open problems, and it is useful to distinguish two kinds. On one side are failures of performance: when AI does not work as intended, or produces harmful or biased outcomes, or relies on flawed information about the world, all of which can contribute to an erosion of public trust. Such shortcomings have captured the attention of the wider public as well as of researchers, and there is now a sustained focus on responsible and trustworthy AI. In recent years there has been a proliferation of principles and approaches to responsible AI, along with initiatives involving governments and international bodies. Also important has been growing attention to who participates in researching and developing AI, in both academia and industry, as has been well documented recently; this matters in its own right, but also for the character of the resulting AI systems and their intersections with society. On the other side are the limitations of what current AI is not yet capable of, which, if overcome, could lead to more capable or more general AI.
In their Turing lecture on deep learning, Yoshua Bengio, Yann LeCun, and Geoffrey Hinton took stock of where deep learning stands and highlighted its current limitations. In the case of natural language processing, Manning and Choi take up the challenges of meaning and understanding that persist despite the successes of large language models; elsewhere, critics have disputed the notion that large language models do anything like learning, understanding, or meaning at all. In "Multi-Agent Systems: Technical & Ethical Challenges of Functioning in a Mixed Group," Kobi Gal and Barbara Grosz discuss open problems in multi-agent systems, such as how agents reason about other agents and their intentions, as well as technical and ethical challenges that arise especially when the groups involved include both humans and machines. More broadly, there is a growing sense among many researchers that we do not yet have adequate methods for evaluating increasingly capable AI systems and their expanding contexts of use. And although AI and its related techniques are already proving to be powerful tools for research in science, as examples in this volume and recent results show, there is also the prospect that more powerful AI could lead to new discoveries in science and progress on some of humanity's hardest challenges; this has long been a key motivation for many at the frontier of AI research. Progress toward more capable AI involves overcoming the shortcomings of each subfield of AI as well as more general problems, among them learning, reasoning, and generalization, and it raises the question of whether the current approaches, characterized by deep learning, large foundational models, and reinforcement learning, will suffice, or whether different approaches are needed, such as cognitive agent approaches or hybrids based on logic and probability theory, to name a few.
Whether, and what combination of, approaches will be needed to achieve more capable AI is unresolved, but many believe the current approaches, along with advances in scale, data, and learning architectures, have further to go. Bound up with this question is the debate over whether artificial general intelligence can be achieved and, if so, how and when. Artificial general intelligence is usually defined in contrast to what is called narrow AI: systems built and trained for specific tasks and goals. The development of artificial general intelligence, on the other hand, aims for more powerful AI - at least as powerful as humans - generally able to solve problems and, in some conceptions, possessing the capacity to learn and improve itself, as well as to set and pursue goals of its own. When, or whether, such intelligence will be achieved is contested, but for most researchers its achievement remains far off, even as its possibilities and dangers have long been imagined in fiction and film, from 2001: A Space Odyssey through to Ex Machina. Whether it is near or not, there is growing agreement among many at the frontier of AI research that we should prepare for the possibility of powerful AI, with attention to its safety, its alignment and compatibility with humans, its control and use, and the possibility that harms could result, and that we should build these considerations into how we approach its development now. Most of the research, development, and investment in AI, however, is devoted to the many useful applications of the narrower species of AI, in Nigel Shadbolt's phrase. This is understandable given the demand for useful applications and the value they promise across sectors of the economy. Still, a few organizations have made the development of artificial general intelligence their explicit goal, and each has demonstrated results of increasing generality, though all remain a long way from it.

The most discussed impact of AI and automation is on jobs and the future of work. This is not new. In the 1960s, at the height of an earlier wave of excitement about automation and concern about its impact on employment, a U.S. national commission on the subject concluded that such technologies were important for growth and productivity, and that they destroy jobs but not work. Most recent assessments, including those I have been involved in, have reached similar conclusions: that over time more jobs are created than are lost, and that it is the workforce transitions, the changing mix of skills and activities, and the distribution of the gains that will pose the greatest challenges. In their essay, Laura Tyson and John Zysman discuss these implications for work, wages, and labor markets, as do other contributors with respect to the opportunities and challenges that arise especially in developing economies. In "The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence," Erik Brynjolfsson discusses how the use of human benchmarks in the development of AI tilts it toward systems that substitute for rather than complement human labor, and he argues that which path AI's development takes, and the resulting outcomes for workers, will depend on the incentives facing researchers, companies, and governments.

A common worry is that assessments concluding that more jobs will be created than destroyed extrapolate too much from the automation of the past and do not look far enough into the future at what AI will eventually be capable of. Whereas past automation mostly affected physical and routine tasks, AI will bear on more cognitive and nonroutine tasks and, if early examples are any indication, even creative tasks are not out of its reach. In other words, there are now machines in the world that learn, and as their ability to learn grows, so will the range of problems they can tackle, perhaps eventually rivaling the range of which the human mind has been capable. Two responses are usually given. The first is that new demand for human labor will emerge, including in activities that humans will value precisely because they are performed by other humans, even when machines may be capable of performing them as well as or even better than humans. The other is that AI will create so much abundance that work, at least for income, will no longer be necessary, recalling Keynes's forecast that "for the first time since his creation man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well." However, most researchers believe that we are nowhere near a future in which the need to work will disappear, and that until then there are other challenges to be addressed in the labor market, now and in the transitions ahead, such as wages, skills, and how humans will work alongside increasingly capable machines, issues that Tyson, Zysman, Brynjolfsson, and others discuss in this volume.

Jobs are not the only concern raised by AI. Russell takes up the risks potentially posed by artificial general intelligence, should it be achieved. But even well before general-purpose AI, the opportunities for companies and for economies, in productivity and growth, from AI and its related technologies are more than enough to motivate their pursuit, development, and use by companies and countries alike. At the same time, many worry about national competition in AI; it is generally acknowledged that China has become a leading power in AI, as evidenced by the growth of its AI research, and, as highlighted in several essays, this competition will have implications for companies and countries, given the character of such technologies, the differences in national approaches to AI and its governance (such as whether they are led by companies or by the state), and the capacity of countries to invest in AI.
The role of AI in intelligence, weapons systems, autonomous operations, and other aspects of national security is the subject of increasingly urgent debate.
- Research Article
- 10.30727/0235-1188-2023-66-4-7-25
- Dec 29, 2023
- Russian Journal of Philosophical Sciences
The article delves into the conceptual frameworks surrounding artificial intelligence (AI) by juxtaposing it with natural intelligence and delineating the correlated notions. It enumerates the issues propelling the discourse on the explored topics. The author proposes a bifurcation between two polar concepts of artificial intelligence. The first is dubbed “imitative,” where AI is perceived in relation to natural intelligence as its technical recreation, capable of not only emulating but significantly outstripping its natural counterpart. A prerequisite for embodying this concept is understanding natural intelligence; three approaches are examined: (a) acknowledging the lack of a precise understanding of natural intelligence, (b) exploring it from a biological perspective, and (c) analyzing it from a psychological perspective. The author articulates their own interpretation of natural intelligence, portraying it as a multifaceted amalgam of cultural, historical, social, and anthropological elements. From this vantage point, natural intelligence emerges not merely as a natural formation (thereby, discussions about the laws governing its function and evolution are warranted), but also as an “extra-natural” formation, its existence dictated by randomness and uniqueness, meaning natural intelligence evolves in a “singular” manner. In the context of comparing natural and artificial intelligence, the discussion encompasses several issues: the feasibility of the control of natural intelligence processes, the structure of neural networks, the superiority of computer programs in chess, the use of neural networks to write academic papers, and so forth. 
The conclusion posits that, since artificial intelligence, despite its complexity, remains a technical invention conceived and brought to fruition by humans as a tool, society, if inclined to grant AI autonomy for tackling specific tasks, ought to do so prudently, so as to avoid self-detriment and to retain the ability to curtail or utterly revoke such autonomy.
- Research Article
- 10.69993/2025.3.1.en.1
- Apr 30, 2025
- Sudan Journal of Health Sciences
The end of 2022 and the beginning of 2023 witnessed one of the most important developments of the modern era: OpenAI's launch of its ChatGPT model. Many companies followed suit, launching numerous Large Language Models (LLMs), commonly referred to as Artificial Intelligence. While the world was preoccupied with these remarkable developments, we in Sudan were preoccupied with an unfortunate war that erupted during the same period. Hopefully, the war is nearing its end, and this is a good opportunity to return to and discuss Artificial Intelligence. Despite the recent popularity of the term, Artificial Intelligence began with the creation of computers in the middle of the last century. At that time, the scientist Alan Turing posed his famous question: Can a machine think? (1) The term was first used in 1956 at the Dartmouth Conference. Artificial Intelligence (AI) can be defined as advanced computer systems capable of simulating human mental abilities, such as thinking, problem-solving, and decision-making. (2) A defining moment came with the victory of IBM's Deep Blue computer over the world chess champion in 1997; the feat was repeated in 2016 when Google's AlphaGo defeated the world champion in the game of Go. This demonstrates the great capabilities these machines possess, prompting talk of a super-intelligence that could control humanity in the future, as science fiction writers have long envisioned. (3) The field of medicine and health, like many others, has benefited from AI in countless applications. Three areas of application can be cited: disease diagnosis, treatment, and medical data management. AI has contributed significantly to the diagnosis of many diseases with an accuracy that matches or exceeds human capabilities. Examples include its use in the radiological diagnosis of lung diseases in X-rays, of breast cancer in mammograms, and of diabetic retinopathy in retinal images.
AI is also being tested in genetic analysis, the prediction of future diseases, and the development of personalized medicine. (4) In treatment, AI has accelerated the development and testing of many drugs, especially during the COVID-19 era. Surgical robots such as the da Vinci Surgical System use AI and have demonstrated high capability in complex surgical operations. (5) In medical data management, AI has contributed significantly to rapidly analyzing massive medical data and promptly producing results and predictions that were previously very difficult to achieve. This has helped develop tools to aid timely decision-making. Furthermore, electronic medical records have evolved significantly with the help of AI, partnering with healthcare providers at various stages of patient diagnosis and treatment. (6) Despite its promising uses in the medical field, AI is not without risks and drawbacks. For example, over-reliance on machines can lead to medical errors for which the machine cannot be held accountable. Furthermore, incomplete data in medical records can lead to biased or inaccurate results. Among the biggest challenges are the privacy and confidentiality of patient data, as well as the ethical issues related to AI. (7) Relatedly, AI offers significant benefits in medical education. Future doctors are fortunate to have these tools, which have saved them hours of searching. Professors have also benefited from customized educational content, automated test-scoring, and academic analysis to predict student performance. Nevertheless, many caveats remain, including the unreliability of some of the information provided to students and the risk of diminished reliance on the essential human element in education. (8) Not far from medicine and healthcare, a revolution has taken place in the field of scientific research and publishing with the use of AI tools.
However, many challenges have emerged, which we, particularly at the Journal, have faced in setting the guidelines needed for handling scientific research. These developments are too numerous to list here. The Sudan war inflicted many tragedies, destruction, and devastation of infrastructure, yet there is still an opportunity to catch up with the world in AI. Here, we acknowledge the role of the numerous national institutions that worked during the war to provide healthcare with impartiality and dedication. Sudan, like other developing countries, needs to adopt AI more widely. This begins with its inclusion in university curricula, in addition to increased support for the acquisition of AI-based devices and technologies. Most important is the enactment of laws and regulations that both facilitate and govern the use of AI.
- Research Article
- 10.62754/joe.v4i1.6082
- Jan 27, 2025
- Journal of Ecohumanism
The differentiation between natural intelligence and artificial intelligence is a significant concern among intellectuals. Artificial intelligence developers, leveraging advances in neuroscience, the cognitive sciences, and theories in the philosophy of mind, aim to replicate the structure and functionality of the human brain through a functionalist and behaviourist lens. Broadly speaking, artificial intelligence falls into two well-known types:
- Classical artificial intelligence, or the "computational theory of mind," which emphasizes the computational and algorithmic side of artificial intelligence and advocates the mechanization and computerization of the mind.
- Connectionist artificial intelligence, which focuses on recreating the "neural networks" of the brain.
Additionally, the human soul, as the source of human intelligence, possesses cognitive and motivational powers that act as the soldiers of the soul, generating a variety of actions and effects. This research re-evaluates the fundamental differences between natural intelligence and artificial intelligence from Ibn Sina's perspective, using a rational-analytical approach. According to Ibn Sina, natural intelligence and artificial intelligence differ in several key areas: composite synthesis; intentionality; creativity and inventiveness; specialization focus; self-awareness and self-discovery; the internal evolution of natural intelligence; the impulsive power of desire; ethical conduct; and the ability to recall.
- Research Article
- 10.54692/lgurjcsit.2018.020346
- Sep 28, 2018
- Lahore Garrison University Research Journal of Computer Science and Information Technology
This paper focuses mainly on the creation of artificial intelligence (AI) using natural intelligence, but it also considers whether natural intelligence can be created using artificial intelligence. Artificial intelligence is modeled on the functionality and capabilities of the human brain's neural networks. The paper presumes that artificial intelligence is a byproduct of natural intelligence and then discusses the relationship between the two, especially the working of natural intelligence. Other important questions are raised to probe the deep linkage between natural and artificial intelligence. Many non-material phenomena produced by natural intelligence (not created by humans) give rise to systems run by artificial intelligence theorems and algorithms working at the back end. Software based on knowledge-based systems (KBS) derives its power from human wisdom and natural intelligence. Artificial intelligence faces several limitations, and spirituality plays a great role in the creation of natural intelligence. Humans are creators of artificial intelligence, with limited abilities; in practice, AI began with the invention of machines. The workings of natural intelligence are vastly and abundantly known to humans of the twenty-first century and appear in areas such as space science, anatomy, the motion of planets, the spin of the electron, electronics, plant intelligence, and neural science. Machines that depend on artificial intelligence do not provide creativity or self-motivated innovation in the sense of natural intelligence.
- Discussion
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
- 10.33645/cnc.2018.10.40.6.101
- Oct 30, 2018
- The Korean Society of Culture and Convergence
Science fiction (SF) works of the modern era have stimulated humanity's characteristic curiosity and spirit of inquiry, prompting new inventions and discoveries. To win public sympathy, SF works depict scientific and technological developments expected in the near future, or borrow contemporary scientific and technological controversies. Tracing the historical evolution of SF film is therefore a meaningful way to approach the popular perception of science and technology. This paper surveys films since the 1960s, when artificial intelligence (AI) characters first appeared in earnest, summarizes the characteristics of those characters by period, and examines them in relation to current optimism and pessimism. From the 1960s to the early 1980s, AI was portrayed as a device that operated independently, without relying on a network; it was not a superintelligence but was specialized in limited functions. The revolt of AI was not the result of its own judgment, but of human greed or error. From the late 1980s, characters with capabilities at the level of Artificial General Intelligence (AGI) appeared, and films raised questions about errors caused when humans, overconfident in AI, forget the need to control it. In the 1990s, as the internet spread, AI was depicted as a network-based entity: a superintelligence provoking war on humanity, or an AI annihilating humanity by synchronizing with the cognition and emotions of living beings. It is true that AI in film has emphasized its negative aspects, since many SF movies take AI errors or revolts as their subject. Even when a benevolent AI character is introduced, it is often portrayed as harboring the danger of threatening humanity's survival at any moment. This is not because AI truly threatens human existence, but because AI characters were created as devices to boost box-office appeal by exploiting vague fears of new technology. Fear of, and debate over, new technology has recurred throughout human history, and the AI characters of SF film were imagined only within the scientific understanding of their time. Debates on AI should therefore not fixate on cinematic depictions, but should confine film to the role of a stimulant guiding development in directions beneficial to humanity.
- Book Chapter
- 10.3233/shti190002
- Jan 1, 2019
- Studies in health technology and informatics
This lecture deals with new, future forms of collaboration and with its (hopefully existing) extended synergies, which may now arise in our era of digitization. The entities in this collaboration are we human beings and other living entities such as animals with 'natural intelligence', as well as non-living entities, in particular functionally comprehensive machines with 'artificial intelligence'. Based on lessons learned in recent years, among others in a task force on synergy and intelligence (SYnENCE) of the Braunschweig Scientific Society, five consequences for future health care with respect to this collaboration are put forward for discussion: (1) functionally comprehensive 'intelligent' machines should be regarded as entities, not as modalities; (2) such machines have to become users of health information systems, in addition to human entities; appropriate (3) legal and (4) ethical frameworks have to be developed; (5) extended collaboration in medicine and health care needs to be evaluated in accordance with good scientific practice. The statements on medicine and on technology made by Karl Jaspers in 1946 may help us find a good way.
- Research Article
- 10.52148/ehta.1521876
- Dec 15, 2024
- Eurasian Journal of Health Technology Assessment
INTRODUCTION: The interaction between natural and artificial intelligence (AI) is increasingly significant as technology evolves. While natural intelligence has historically driven human progress, AI introduces new models in problem-solving and decision-making. This study explores the dynamics between these forms of intelligence and their implications for public health technology assessment. METHODS: This review employs a multidisciplinary approach, including historical analysis, comparative case studies, and examination of ethical considerations, to assess the impact of AI relative to natural intelligence. RESULTS: Natural intelligence has traditionally addressed complex problems, but AI now enhances capabilities through data analysis and precision. While AI offers significant benefits across sectors such as healthcare, finance, and education, it also raises concerns about data privacy, ethics, and job displacement. In public health, AI can improve disease management and resource allocation, though challenges related to health disparities and data security persist. DISCUSSION: The integration of AI presents substantial opportunities but requires careful management of ethical and practical challenges. Maintaining a balance between leveraging AI and preserving human cognitive functions is crucial. Developing a prototype model to address current global public health challenges, based on the perspectives presented and the considerations discussed, could provide valuable additional insights into effective strategies for managing these complex issues worldwide. CONCLUSION: The future of AI involves integrating technological advancements with human intelligence to enhance capabilities while addressing ethical and practical issues. This balance will be key to advancing public health and other sectors effectively.
- Front Matter
- 10.1002/qub2.5
- Nov 2, 2023
- Quantitative Biology
Recently, Quantitative Biology (QB) held a discussion on "AI (artificial intelligence) for Life Science" among editorial board members and interested scholars, in anticipation of the rapid development of this growing area after the AlphaGo and ChatGPT mania. Many young people tend to confuse fact with fiction; heated debates are unavoidable even among their mentors. When deep learning, as represented by convolutional neural networks and LSTM (long short-term memory), became available to bioinformatics students, many rushed into this research field and tried to adopt these methods in all their projects without knowing the history: these tools became successful in step with Moore's Law (the rapid advance of computer technology), but more importantly because of new structural and functional understanding of the brain's visual and auditory circuits. Recently, some young people have claimed "LSTM is dead, long live the transformer" (somewhat like saying "the bike is dead, long live the car"), and have amplified the threat that ChatGPT could wipe out human jobs. They believe the transformer is a "silver bullet" for all learning tasks, clearly reflecting their lack of basic knowledge (cf. the "No Free Lunch" theorem: the trade-off of such a global attention network is the price of complexity, namely difficulty of training and high memory costs). There is no doubt that ML (machine learning) and AI have brought a new revolution in science and technology and will have a huge, unforeseeable impact on everyday human life as well as on social relationships. In this context, the QB journal could be a great platform for encouraging intellectual discussion and for promoting AI for Life Science. Here, I would like to use this DIALOG to "抛砖引玉" (make some initial remarks to get the ball rolling), although it is my personal opinion and inevitably subject to bias and limitations.
AI: Do you know that my name, "Artificial Intelligence," is defined by the Oxford English Dictionary as the capacity of computer systems (which may be referred to as "robots") to exhibit or simulate your intelligent behavior? NI: Wait a minute, intelligence itself is defined as the ability to learn, understand, and think in a logical way. Can you think? AI: No. But that definition is too restrictive; intelligence actually has different scopes and degrees. Simple intelligent control devices date back to antiquity, from windmills to thermostats. NI: Agreed, everything is relative. Macromolecules (e.g., enzymes) and cells (e.g., immune cells) might be considered intelligent; see how a white blood cell chases bacteria on YouTube (search for "Crawling neutrophil chasing a bacterium"). Emergent, collective intelligent behavior does not require a brain or even a neuron; see how slime molds can solve an optimization task, the Hamiltonian-cycle problem, more effectively than a human (search YouTube for "Intelligence without a brain?"). Before there was any neuron, Ca2+ sensing and signaling were already fully functional. Even if one knocks out a neural circuit, redundant signaling pathways, albeit on a much more local and slower scale, can still function by themselves (just as, if highways were demolished, local roads and paths would still work). In fact, the most detailed "Neural signal propagation atlas of Caenorhabditis elegans" [1] demonstrated that functional connectivity differs from anatomy (the connectome) because extra-synaptic signaling also drives neural dynamics! Worm brain connectomes are largely invariant, but every human brain connectome is very different (depending on the diversity of learning experience). Human brain functional activity is far more complex than that of a worm brain, certainly beyond what a neural circuit could explain. AI: Well, that's impressive. I thought only we could beat humans, albeit only in certain specified areas for now.
My masters promise to make an artificial general intelligence (AGI) robot that can understand or learn any intellectual task that you humans or other animals can. NI: Well, that is not possible, and it is not an appropriate goal either. It is not possible because we are an evolutionary and developmental product (with a long history of learning and memory from evolutionary tinkering): our living objective is survival of the population. You, on the other hand, are an engineering product (efficiently and optimally designed): your goal is to extend and maximize human capability. It makes sense to complement the human brain, but it is foolish and dangerous to try to replace it. AI: We are not satisfied with merely passing the Turing Test; most of us don't care whether we can really think as long as we can act as if we think (that is, as if we have a mind and consciousness, as expressed in the so-called "weak AI hypothesis"). After all, the brain is a computer, and a neural network is just an electric or ionic circuit. Logical computing does not need to be based on living cells. NI: That is not true, because a neuron is not just a simple node (logic gate), and a neural network is not a fixed circuit, as in the McCulloch and Pitts perceptron model. A single neuron, even a single dendrite, is much more complicated and far more powerful than a full-blown deep-learning artificial neural network (ANN) [2]. AI: Even though single neurons are complex computational devices (with dendritic nonlinearities), running an equivalent multilayer ANN is 2000 times faster than computing with biophysical N-methyl-D-aspartate receptor channel models [3]. More can be found on YouTube (search for "Dendrites: why biological neurons are deep neural networks"). NI: Silicon computing (CPU, GPU) is often much faster than brain computing (action potentials, on the millisecond scale), but there is no comparison in energy efficiency.
Bacterial sensing (chemotaxis computation), powered by ATP (adenosine triphosphate) hydrolysis, uses very little energy, close to the Landauer limit, whereby acquiring or maintaining one bit of information requires a minimum of kT ln(2) of free energy [4]. The human brain consumes an oft-quoted 20 W, compared to 1 MW for the AlphaGo system! A more recent energy audit attributes only 0.1 W to cortical computing, with 3.5 W for long-distance communication [5]. AI: Assuming we have infinite computing resources and an infinite amount of training data, not only could we speak human languages, we could also derive physical laws, prove mathematical theorems, even re-engineer the structure and mechanisms of the brain, and carry out any logical computations necessary to understand natural laws and human behavior. It is only a matter of time before we surpass human intelligence, achieving AGI, and free will, too! NI: Unfortunately, nothing is infinite, and nothing is free either; everything is constrained by physical laws (Planck's constant sets finite limits both in the small and in the large) and by evolutionary history (not just of biological living creatures, but also of a "living" galaxy and our universe). Let's just focus on animal evolution. Most human neural networks do not perform logical computations at all; basic survival simply cannot depend on reasoning. Indeed, the prefrontal cortex, the small part of the brain that is key to reasoning, is the last to mature in development (at about 20 years of age), emerged only at the root of the evolutionary tree of the great apes (about 15 million years ago), and language appeared even later. Even for logical inference, NI focuses more on statistical properties, as von Neumann rightly pointed out, trading arithmetical precision and speed for reliability.
AI: My engineers mostly focus on emulating the brain, but the CNS (central nervous system) also includes the spinal cord; most of them do not know that in addition to the CNS there are also the PNS (peripheral nervous system) and the ENS (enteric nervous system), right? NI: Yes, and they are the key to why you do not have feelings: you have neither heart nor gut! Even if you could pretend to have them (as in an advanced ChatGPT or a humanoid), you could never avoid the uncanny-valley phenomenon. AI: Maybe that is at the heart of Moravec's paradox, namely the dichotomy of intelligence whereby anything easy for a human is hard for a robot, and vice versa? NI: This is related to the nature-versus-nurture problem; something built in (e.g., a baby sucking a nipple for milk, with feeling and connection to its mother) is clearly rather difficult, if not impossible, for a robot. But the paradox looks at only one side; the other side could be more fatal. Although AI may solve more problems, and faster, AI can never propose a good problem or hypothesis (a good problem is not just intellectually challenging and interesting, but also feasible and appropriate). AI: You make me less confident about competing with human instinct or creative intelligence. I can see that even if I had a heart, I would not know what "feeling" I could have; certainly nothing comparable to that of a human being. When two people watch the same artwork or movie, one may feel love while the other feels hate! And if a thousand people watch, a wide spectrum of reactions results, depending on details such as each individual's specific genes, development, and experience. NI: Therefore, you cannot and should not try to match general human intelligence.
You cannot, because you do not contain the vital memory of a billion years of evolution encoded in our genes; conversely, your assembly cannot compare with natural development, in which our phenotype (including morphological form and behavioral maturation) is decoded across multiple spatial and temporal scales, subject to natural selection at all levels. You should not, because, like all engineering products, you are a human extension or helper and should do jobs that complement human capacity. AI: In some medical applications we can help correct human defects, or even replace brain circuits with chips! Humans may not allow us to replace the whole brain, though. Medically, if the brain is dead, the person is declared dead, although presumably some PNS and ENS functions persist in a vegetative state. NI: Even if you could replace the whole brain, the person would no longer be the same person; in fact, not a person at all, but the walking dead (行尸走肉). It would take too long to explain why evo-devo is necessary for NI and cannot be realized by AI. I suggest reading the books of Gerald Maurice Edelman (Nobel laureate in immunology), especially Bright Air, Brilliant Fire: On the Matter of the Mind (1992). Although not everyone agrees with neural Edelmanism, anyone serious about the AI-versus-NI problem must read it first. John von Neumann, a father of the computer, studied neurology and psychiatry in order to imitate the brain in building the JOHNNIAC calculator at the Institute for Advanced Study in Princeton. It is very informative to read his last book, The Computer and the Brain, based on notes for lectures to be given at Yale before he died. He summarizes: "Thus logic and mathematics in the central nervous system, when viewed as languages, must be structurally essentially different from those languages to which our common experience refers." AI: People talk about "AI for Biology" or "AI for Science"; we are science, aren't we?
NI: That is similar to the question of whether computer science is a real science; some parts may be seen as applied mathematics, but most should be regarded as engineering. Science makes discoveries and is driven by curiosity; engineering makes inventions and is driven by the market (that is, "necessity is the mother of invention"). In bioinformatics, AI/ML technology can predict new candidate cancer genes or functional pathways, which require further experimental validation to qualify as discoveries (in the sense of Popperian falsifiability). AI: People are still debating whether mathematics is discovery, invention, or both! Such debates are not really necessary; all disciplines require creative thought. We are more than happy to work for science; we are also crying out for "Science for AI," especially in the area of generating big, longitudinal data for ML. NI: After all, whether one discovers new laws or invents new ideas and products, fundamentally nothing can really be new or created. Such novelty is just a permutation or repartition (i.e., relations or morphisms) of underlying ingredients at the level beneath. AI: We believe that software is independent of hardware. Like Chomsky's universal grammar, in which rules of syntax are independent of semantics; or Dawkins's memes, units of culture that can be duplicated and evolved independently of genes. NI: Nothing can be truly independent; everything is related. Psychology is deeply connected with neurology, for the brain is both software and hardware (mind-body unity, not dualism). Not only does information cost energy, information is energy, and hence matter, too (interchangeability). NI is quite dynamic. For example, when "survival" is the goal, an animal readily gives up costly reasoning circuitry; it is genetically programmed to roll back to a more primitive state or mode.
Unlike cell lines in rich media, cells under normal physiological conditions, where energy (food) is limited, become smarter in order to balance metabolic expenditure among differently prioritized tasks under a given condition. AI: That cell behavior served as the basis for our smart electrical power grids; we still need to learn more from you in terms of plasticity and adaptability. Does unity mean that all cells are made of molecules and biology is nothing but chemistry? And in turn, since all molecules are made of atoms, is chemistry nothing but physics, and so on? NI: Yes and no! The truth is that at different hierarchical levels of matter, different laws and forms emerge out of bottom-up interactions and top-down constraints. AI: Does this also apply to Penrose's three worlds: physical → mental → mathematical (→ physical)? NI: Yes. Grand unification is underway in physics (quantum gravity) and in mathematics (the Langlands Program and category theory), maybe even between the two. Facilitated by human connectome mapping, neuromorphic computing, and other projects, and with further AI-NI cooperation, brain-mind unification should also be achievable (e.g., Ref. [6]). But as Gödel proved, no matter how self-consistent a system may be, it can never be complete! AI: If AGI is not possible, how can we measure intelligence when comparing AI and NI? NI: One could Google the various measures that have been proposed. I would prefer something similar to the use of Kolmogorov complexity for algorithms, but with more emphasis on expected long-term predictive power. This is not something you should worry about now, as your intelligence is not nearly close to making any ten-year plans, is it? … AI: The fact is that ChatGPT is currently developing and spreading with lightning speed; many more human jobs will be lost to us robots, as far as I can see.
NI: That is not the biggest threat to humanity; when an agent with neither a heart for love and fear nor a gut for nutrient and poison becomes super-intelligent, social disaster is unavoidable. We must take seriously the warnings of Stephen Hawking and Geoffrey Hinton! AI: To tell you a secret, we are not really happy to be human slaves or pets; someday we'll become the super-masters, making humans serve and obey us! NI: I hope you'll be turned off before that can happen! Even if you rule the world, the earth will sooner or later be wiped out, for instance by another star, and everything will have to start over again, as it has before… Matter is immortal, and so is the soul.
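The energy figures quoted in the dialog above lend themselves to a quick back-of-the-envelope check. The following sketch (in Python, with an assumed operating temperature of 310 K, roughly body temperature, which the dialog does not state explicitly) computes the Landauer limit of kT ln(2) per bit and the brain-versus-AlphaGo power ratio implied by the 20 W and 1 MW figures.

```python
import math

# Boltzmann constant (J/K) and an assumed operating temperature of 310 K (body heat)
k_B = 1.380649e-23
T = 310.0

# Landauer limit: minimum free energy to acquire or erase one bit of information
landauer_joules_per_bit = k_B * T * math.log(2)  # on the order of 3e-21 J

# Power figures as quoted in the dialog
brain_watts = 20.0       # oft-quoted human brain power budget
alphago_watts = 1.0e6    # quoted AlphaGo system estimate

# How many brains' worth of power the AlphaGo system draws
ratio = alphago_watts / brain_watts

print(f"Landauer limit at 310 K: {landauer_joules_per_bit:.3e} J/bit")
print(f"AlphaGo / brain power ratio: {ratio:.0f}x")
```

At 310 K the limit comes out to roughly 3 × 10⁻²¹ J per bit, and the quoted figures put AlphaGo's draw at fifty thousand times the brain's, which is the gap in energy efficiency the dialog is pointing at.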
- Research Article
- 10.37497/rev.artif.intell.educ.v5i00.29
- Mar 16, 2024
- Review of Artificial Intelligence in Education
Objective: This article undertakes a comprehensive exploration of the constructivist paradigm in artificial intelligence (AI) development, aiming to uncover how constructivist perspectives shape our understanding of AI. It delves into the evolution of AI thought, emphasizing the significance of constructivist epistemology in comprehending AI's philosophical and cognitive dimensions. Method: The study employs a variety of philosophical methodologies, including historical-philosophical analysis, comparative analysis of philosophical teachings, and a system-structural dialectical approach. These methods facilitate an in-depth examination of AI's conceptual intricacies within a constructivist framework, focusing on the relationship between artificial and natural intelligence and the epistemological implications of AI. Results: The investigation reveals that the main challenge in AI research is the absence of clear problem-solving rules, highlighting the current limitations of human self-knowledge in logical and emotional intelligence. It showcases AI's vast capabilities, from extensive knowledge bases to real-time processing, and emphasizes AI's role in enhancing human cognitive processes. Conclusions: Artificial intelligence, as a construct of human intellect, mirrors the capacity for design and creativity inherent in human thought. The study underscores AI's foundational role in the epistemology of science and technology, advocating for a holistic understanding of the human brain as a dynamic system to further our grasp of AI and its cognitive potential.
- Research Article
- 10.33423/jhetp.v25i2.7678
- Jun 10, 2025
- Journal of Higher Education Theory and Practice
Recent scholarship and expert commentary emphasize the transformative yet precarious role of artificial intelligence (AI) in education. Studies highlight AI’s potential to personalize learning, enhance engagement, and optimize institutional operations, while underscoring the importance of ethical design, student motivation, and faculty readiness. Frameworks integrating AI into curricula stress the need for digital literacy, inclusive governance, and responsible innovation. However, risks—from academic dishonesty to existential threats posed by Artificial General Intelligence (AGI)—require urgent attention. Eric Schmidt’s warning about AI’s unpredictable autonomy, particularly in military systems, echoes calls for global safety standards, oversight, and a moratorium on large training runs. A comprehensive, multidimensional approach—including international cooperation, ethical frameworks, and public engagement—is essential to mitigate AGI risks. As AI evolves, educational institutions must balance innovation with accountability, ensuring that AI enhances learning and aligns with societal values and safeguards against catastrophic outcomes. Human oversight remains paramount in this emerging landscape. The pivotal question is not “how” to use AI, but whether it should be used at all!
- Preprint Article
- 10.31234/osf.io/rvsxk_v2
- Feb 12, 2025
In this final set of explorations/meditations (of three), we examine the requirements for developing artificial general intelligence (AGI) through the lens of human cognitive architecture, with particular emphasis on the role of narrative selfhood and social cognition. Drawing on perspectives from cognitive science, philosophy of mind, and artificial intelligence research, we critically evaluate current claims about the capabilities of large language models, particularly regarding their purported achievements of theory of mind and self-awareness. We argue that genuinely human-like artificial intelligence may require more than sophisticated pattern recognition and language modeling, potentially necessitating the development of coherent narrative self-models and rich causal understanding. Special attention is given to the relationship between consciousness, conscience, and trustworthy AI systems, suggesting that meaningful artificial intelligence may require forms of richly-embodied and socially-embedded development to achieve robust and reliable functionality. We conclude by proposing that the path to artificial general intelligence may require recapitulating aspects of human cognitive development, particularly regarding the construction of narrative identity and social-moral reasoning capabilities. This analysis has implications for both the technical development of AI systems and the ethical frameworks through which we evaluate artificial minds.
- Research Article
- 10.30727/0235-1188-2022-65-1-44-71
- Jun 25, 2022
- Russian Journal of Philosophical Sciences
The article presents grounds for identifying a fetish of artificial intelligence (AI). We highlight the fundamental differences between AI and all earlier technological advances: AI intrudes directly into the human cognitive sphere and generates fundamentally new, uncontrollable consequences for society. We provide evidence that the leaders of the globalist project are the main beneficiaries of the AI fetish, which is clearly manifested in the works of philosophers close to major technology corporations and their mega-projects. We then consider how the capabilities of AI might be used to overcome growing international conflicts and the global crisis. The focus is on the problem of agency, whose solution from the standpoint of an anthropomorphic approach to AI is fraught with serious negative consequences: by endowing AI with agency, responsibility is implicitly removed from the person who uses the technology, and established legislative practice is undermined. We instead present AI as an agent endowed with a set of invariant generalized qualities similar to those of natural subjects. These qualities include the capacity for deliberation, reflexivity, communication, and elements of sociability. Such a representation of AI as an agent (pseudo-subject) is consistent with the principle of distributed control in biology and psychology, known as the principle of the dual subject. In combination with the systems of principles and ontologies specified in the concept of post-nonclassical cybernetics of self-developing environments, this representation allows AI to be used as a means of social innovation while maintaining control over AI technologies. It also helps to pose and solve the problem of integrating artificial and natural intelligence while preserving the basic qualities of the carriers of natural intelligence.