A literary character as a humorous meme: A semiotic perspective

Abstract

The article analyzes the phenomenon of a unique meme, based on a literary character, that appeared simultaneously with ChatGPT. This meme is a striking example of a floating signifier (Buchanan 2010) arising from mixed discourses. It is the Shoggoth meme, created from a fantastic monster in the fiction of Howard Phillips Lovecraft and transformed, in our time, into a humorous image with a smiley face. The seriousness of “Lovecraftian horrors” as an element of twentieth-century mass culture has given way to a playful, humorous mode in the digital reality of the twenty-first century. To understand the meaning of the meme, it must be considered from the standpoint of the semiotics of fear, following the classification of Lotman (2004d), but also from the point of view of the semiotics of humor. The meaning of the humorous Shoggoth meme is to show that artificial intelligence and human intelligence can interact and understand each other.

Similar Papers
  • Research Article
  • Cited: 5
  • 10.1108/jmp-06-2024-0398
Substitution and complementarity between human and artificial intelligence: a dynamic capabilities view
  • Nov 15, 2024
  • Journal of Managerial Psychology
  • Christopher Agyapong Siaw + 1 more

Purpose: This paper draws on the dynamic capabilities (DC) view to develop a conceptual framework that explicates the mechanisms through which human intelligence (HI) and artificial intelligence (AI) substitute and complement each other for organizational knowledge management (KM), while considering the role of ethics.

Design/methodology/approach: This is a conceptual paper that draws on DC theory and integrates insights from the burgeoning literature on organizational AI adoption and application to develop a conceptual framework that explains the mechanisms through which HI and AI may substitute and complement each other for organizational KM to develop DC.

Findings: The conceptual framework demonstrates that substituting HI with AI is suitable for external environmental scanning to identify opportunities, while AI substitution for HI is ideal for internal scanning through data analytics. Additionally, HI complementing AI is effective for seizing opportunities by aligning internal competencies with external opportunities, whereas AI complementing HI is beneficial for reconfiguring assets by transforming tacit knowledge into explicit knowledge. This substitution and complementarity between HI and AI shape the KM processes (acquisition, conversion, application, and retention) that influence organizational performance, depending on how internal and external ethical standards govern organizational AI use.

Research limitations/implications: The paper presents key insights into how AI may substitute for HI for internal data analytics in KM but may be ineffective for external environmental scanning to sense opportunities. It further reveals that using AI to capture and convert tacit knowledge (HI) to explicit knowledge requires ethical considerations at the organizational level, but ethical considerations are necessary at the employee/manager level when HI relies on AI-generated insights for strategic decisions.

Practical implications: The study implies that in environments with defined regulations for AI and KM (e.g. privacy protection), responsibility for the consequences of AI-HI substitution and complementarity in developing DC can be assigned to specific steps in the KM process. However, in environments with undefined regulations, responsibility must be assigned to the people, units, or departments who manage the entire KM process, to ensure accountability for ethical breaches.

Originality/value: This study proposes AI-HI substitution and complementarity in organizations to extend the understanding of the relationship between AI and HI to DC development.

  • Research Article
  • Cited: 1
  • 10.15802/ampr.v0i24.295317
Artificial Intelligence as a Socio-Cultural Phenomenon: the Educational Dimension
  • Dec 29, 2023
  • Anthropological Measurements of Philosophical Research
  • Z V Stezhko + 1 more

Purpose: The study aims to understand artificial intelligence as a socio-cultural phenomenon and its impact on education, where the spiritual sphere of humanity, moral norms, values, and human cognitive abilities are preserved, transferred, and reproduced. A new discourse on the interaction of artificial and authentic human intelligence becomes inevitable, which has led to a situation of uncertainty. Changes in the socio-cultural environment under the influence of artificial intelligence increase potential threats to the educational space, which calls for ways to eliminate them.

Theoretical basis: Various approaches from the classical and postmodern philosophical heritage were taken as the theoretical basis for the research. The originality of the study lies in the interpretation of artificial intelligence as a modern form of alienation of essential human characteristics in the socio-cultural context of information technology. The expansion of artificial intelligence raises awareness of the existential threat to the basic socio-cultural, moral, and ethical principles of humanism. It is proved that various forms of alienation in the currently existing socio-cultural space are typical of our reality, which changes the system of values, moral principles, and social organization of the community.

Conclusions: It is proved that AI is a natural stage of scientific and technological progress, which reflects its secondary nature, derivative of human (authentic) intelligence. Human intelligence will always have advantages over AI due to its ability to create, communicate socially and culturally, and be emotional. The dilemma of the counterbalance between human and artificial intelligence is perceived mainly at the emotional level. The millennia-old understanding of the primacy of the creator over his creation can traditionally overcome this contradiction. The universality of human thinking is an undeniable advantage of human intelligence and a guarantee of its, and thus our, priority.

  • Research Article
  • Cited: 19
  • 10.1016/j.compind.2023.103946
Hybrid intelligence in procurement: Disillusionment with AI’s superiority?
  • May 15, 2023
  • Computers in Industry
  • Markus Burger + 2 more

  • Research Article
  • Cited: 22
  • 10.1016/j.jdent.2024.105146
Novel AI-based automated virtual implant placement: Artificial versus human intelligence
  • Jun 22, 2024
  • Journal of Dentistry
  • Bahaaeldeen M Elgarba + 3 more

Objectives: To assess the quality, clinical acceptance, time-efficiency, and consistency of a novel artificial intelligence (AI)-driven tool for automated presurgical implant planning for single tooth replacement, compared to a human intelligence (HI)-based approach.

Materials and methods: To validate a novel AI-driven implant placement tool, a dataset of 10 time-matching cone beam computed tomography (CBCT) scans and intra-oral scans (IOS), previously acquired for single mandibular molar/premolar implant placement, was included. An AI pre-trained model for implant planning was compared to human expert-based planning, followed by the export, evaluation, and comparison of two generic implants (AI-generated and human-generated) for each case. The quality of both approaches was assessed by 12 calibrated dentists through blinded observations using a visual analogue scale (VAS), while clinical acceptance was evaluated through an AI-versus-HI battle (Turing test). Subsequently, time efficiency and consistency were evaluated and compared between both planning methods.

Results: Overall, 360 observations were gathered, with 240 dedicated to VAS, of which 95 % (AI) and 96 % (HI) required no major, clinically relevant corrections. In the AI-versus-HI Turing test (120 observations), 4 cases had matching judgments for AI and HI, with AI favoured in 3 and HI in 3. Additionally, AI completed planning more than twice as fast as HI, taking only 198 ± 33 s compared to 435 ± 92 s (p < 0.05). Furthermore, AI demonstrated higher consistency, with a zero-degree median surface deviation (MSD) compared to HI (MSD = 0.3 ± 0.17 mm).

Conclusion: AI demonstrated expert-quality and clinically acceptable single-implant planning, proving to be more time-efficient and consistent than the HI-based approach.

Clinical significance: Presurgical implant planning often requires multidisciplinary collaboration between highly experienced specialists, which can be complex, cumbersome, and time-consuming. AI-driven implant planning, however, has the potential to allow clinically acceptable planning that is significantly more time-efficient and consistent than the human expert's.
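The reported timing difference can be sanity-checked from the summary statistics alone. A minimal sketch, assuming equal group sizes of n = 10 (taken from the 10-scan dataset described above) and Welch's approximation:

```python
import math

# Summary statistics reported in the abstract: planning time in seconds (mean, SD)
ai_mean, ai_sd = 198.0, 33.0   # AI-driven planning
hi_mean, hi_sd = 435.0, 92.0   # human expert planning
n = 10                         # assumed group size (the 10-scan dataset)

# Welch's t statistic computed from summary statistics (no raw data needed)
se = math.sqrt(ai_sd**2 / n + hi_sd**2 / n)
t = (hi_mean - ai_mean) / se

print(f"Welch t = {t:.2f}")  # well above ~2.1, consistent with the reported p < 0.05
```

The resulting t of roughly 7.7 is far beyond any conventional significance threshold, so the p < 0.05 claim is plausible even under this rough approximation.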

  • Research Article
  • Cited: 107
  • 10.1089/omi.2019.0038
Integrating Artificial and Human Intelligence: A Partnership for Responsible Innovation in Biomedical Engineering and Medicine.
  • Jul 16, 2019
  • OMICS: A Journal of Integrative Biology
  • Kevin Dzobo + 3 more

Historically, the term "artificial intelligence" dates to 1956, when it was first used at a conference at Dartmouth College in the US. Since then, the development of artificial intelligence has in part been shaped by the field of neuroscience. By understanding the human brain, scientists have attempted to build new intelligent machines capable of performing complex tasks akin to humans. Indeed, future research into artificial intelligence will continue to benefit from the study of the human brain. While the development of artificial intelligence algorithms has been fast paced, the actual use of most artificial intelligence (AI) algorithms in biomedical engineering and clinical practice is still markedly below its conceivably broader potential. This is partly because, for any algorithm to be incorporated into existing workflows, it has to stand the test of scientific validation, clinical and personal utility, application context, and equity as well. In this context, there is much to be gained by combining AI and human intelligence (HI). Harnessing Big Data, computing power, and storage capacities, and addressing societal issues emergent from algorithm applications, demand deploying HI in tandem with AI. Very few countries, even economically developed states, have adequate and critical governance frames to best understand and steer the AI innovation trajectories in health care. Drug discovery and translational pharmaceutical research stand to gain from AI technology provided they are also informed by HI. In this expert review, we analyze the ways in which AI applications are likely to traverse the continuum of life from birth to death, encompassing not only humans but also all animal, plant, and other living organisms that are increasingly touched by AI. Examples of AI applications include digital health, diagnosis of diseases in newborns, remote monitoring of health by smart devices, real-time Big Data analytics for prompt diagnosis of heart attacks, and facial analysis software with consequences for civil liberties. While we underscore the need for integration of AI and HI, we note that AI technology does not have to replace medical specialists or scientists; rather, it is in need of such expert HI. Altogether, AI and HI offer synergy for responsible innovation and veritable prospects for improving health care from prevention to diagnosis to therapeutics, while the unintended consequences of automation emerging from AI and algorithms, for scientific cultures, the workforce, and society at large, should be borne in mind.

  • Research Article
  • 10.33516/maj.v54i3.46-50p
Decoding AI: From Artificial Intelligence to Super Intelligence
  • Mar 1, 2019
  • The Management Accountant Journal
  • Suraj Kumar Pradhan

The zenith of human civilisation is built on the pillars of its technological prowess. This achievement is attributed purely to the intelligence of the human brain, as the physical abilities of humans are somewhat inferior to those of many other species inhabiting planet Earth. Intelligence has not only helped humans reach the top of the food chain but has also made them the destiny makers of all other species. Now one of humanity's own creations, Artificial Intelligence (AI), is emerging to rival the capabilities of the human brain. Unlike human evolution, which is guided by Nature's natural selection, the evolution of AI is guided by human scientists. Even at its present level, which is below human-level intelligence, AI has the potential to replace most human labour and cause large-scale, catastrophic mass unemployment. Applying Moore's Law and the Law of Accelerating Returns to the evolutionary journey of AI, the arrival of human-level intelligence in machines is inevitable, and almost immediately afterwards AI will reach the level of Artificial Super Intelligence (ASI), an entity a thousand times more intelligent than presently known human intelligence. Although experts are divided on the question of whether ASI will be beneficial, detrimental, or totally indifferent to mankind, many of them believe that the emergence of ASI will lead to an event called the 'Technological Singularity', resulting in the end of mankind.
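The exponential-growth argument above can be made concrete with a toy calculation. A minimal sketch, assuming a capability that doubles every two years (the classic Moore's Law cadence, used here purely as an illustration; the 1000x target is a hypothetical gap, not a figure from the paper):

```python
import math

doubling_period_years = 2.0   # assumed Moore's Law cadence
target_factor = 1000.0        # hypothetical 1000x capability gap

# For an exponentially doubling quantity:
#   factor = 2 ** (years / period)  =>  years = period * log2(factor)
years = doubling_period_years * math.log2(target_factor)
print(f"{years:.1f} years")   # roughly two decades
```

Under these assumptions, even a thousand-fold gap closes in about twenty years, which is the kind of arithmetic behind singularity-style timelines.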

  • Research Article
  • 10.3997/1365-2397.n0138
The melding of artificial and human intelligence in digital subsurface workflows: a historical perspective
  • Dec 1, 2018
  • First Break
  • Matt Breeland

Artificial intelligence (AI) is not some elusive, mystical technology that humanity is chasing, especially in regard to its usage in digital subsurface workflows. Artificial intelligence has been complementing human intelligence since the 1960s, and AI was deeply integrated into our personal and professional lives long before the technological revolutions of the 21st century. However, we tend to not realize how intrinsic AI is to our lives already. We are constantly moving the goalposts for defining AI as it solves more and more problems. This is known as the AI effect, where people tend to only think of AI as ‘whatever hasn’t been done yet’ (Hofstadter, 1979). This article attempts to review the historical melding of human and artificial intelligence in digital subsurface workflows, with some extra focus on the field of geophysics.

  • Research Article
  • Cited: 12
  • 10.1108/k-03-2022-0472
Co-evolutionary hybrid intelligence is a key concept for the world intellectualization
  • Oct 17, 2022
  • Kybernetes
  • Kirill Krinkin + 2 more

Purpose: This study aims to show the inconsistency of the approach to the development of artificial intelligence as an independent tool (just one more tool that humans have developed); to describe the logic and concept of intelligence development regardless of its substrate, a human or a machine; and to prove that the co-evolutionary hybridization of machine and human intelligence will make it possible to reach solutions to problems inaccessible to humanity so far (global climate monitoring and control, pandemics, etc.).

Design/methodology/approach: The global trend for artificial intelligence development was set during the Dartmouth seminar in 1956. The main goal was to define characteristics and research directions for an artificial intelligence comparable to, or even outperforming, human intelligence. It should be able to acquire and create new knowledge in a highly uncertain dynamic environment (the real-world environment is an example) and apply that knowledge to solving practical problems. Nowadays artificial intelligence outperforms human abilities (playing games, speech recognition, search, art generation, extracting patterns from data, etc.), but all these examples show that developers have come to a dead end. Narrow artificial intelligence has no connection to real human intelligence, and in many cases cannot even be used successfully, due to a lack of transparency, explainability, computational effectiveness, and many other limits. A strong artificial intelligence development model can be discussed independently of the substrate, in terms of the development of intelligence and the general properties inherent in that development. Only then can it be clarified which part of cognitive functions can be transferred to an artificial medium. The process of development of intelligence (as mutual development (co-development) of human and artificial intelligence) should correspond to the property of increasing cognitive interoperability. The degree of cognitive interoperability is measured in the same way as the strength of intelligence: it is stronger if knowledge can be transferred between different domains at a higher level of abstraction (Chollet, 2018).

Findings: The key factors behind the development of hybrid intelligence are interoperability (the ability to create a common ontology in the context of the problem being solved, and to plan and carry out joint activities) and co-evolution (ensuring the growth of aggregate intellectual ability without the loss of subjectness by either of the substrates, human or machine). The rate of co-evolution depends on the rate of knowledge interchange and the manufacturability of this process.

Research limitations/implications: Resistance to the idea of developing co-evolutionary hybrid intelligence can be expected from agents and developers who have bet on and invested in data-driven artificial intelligence and machine learning.

Practical implications: Revision of the approach to intellectualization through the development of hybrid intelligence methods will help bridge the gap between the developers of specific solutions and those who apply them. Co-evolution of machine intelligence and human intelligence will ensure seamless integration of smart new solutions into the global division of labor and social institutions.

Originality/value: The novelty of the research lies in a new look at the principles of the development of machine and human intelligence in the co-evolution style. Also new is the statement that the development of intelligence should take place within the framework of the integration of four domains: global challenges and tasks, concepts (general hybrid intelligence), technologies, and products (specific applications that satisfy the needs of the market).

  • Supplementary Content
  • Cited: 334
  • 10.3389/frai.2021.622364
Human- versus Artificial Intelligence
  • Mar 25, 2021
  • Frontiers in Artificial Intelligence
  • J E (Hans) Korteling + 4 more

AI is one of the most debated subjects of today, and there seems to be little common understanding concerning the differences and similarities between human intelligence and artificial intelligence. Discussions on many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, the pursuit of human-like intelligence as the gold standard for artificial intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and “collaborate” with) these systems as effectively as possible. For what tasks and under what conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How do we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. So, in order to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.

  • Discussion
  • Cited: 30
  • 10.1108/jkm-06-2023-0458
Combining artificial and human intelligence to manage cross-cultural knowledge in humanitarian logistics: a Yin–Yang dialectic systems view of knowledge creation
  • Feb 9, 2024
  • Journal of Knowledge Management
  • Tachia Chin + 3 more

Purpose: Aiming to resolve cross-cultural paradoxes in combining artificial intelligence (AI) with human intelligence (HI) for international humanitarian logistics, this paper adopts an unorthodox Yin–Yang dialectic approach to address how AI–HI interactions can be interpreted as a sophisticated cross-cultural knowledge creation (KC) system that enables more effective decision-making for providing humanitarian relief across borders.

Design/methodology/approach: This paper is conceptual and pragmatic in nature, while its structure follows the requirements of a real impact study.

Findings: Based on experimental information and logical reasoning, the authors first identify three critical cross-cultural challenges in AI–HI collaboration: paradoxes of building a cross-cultural KC system, paradoxes of integrating AI and HI in moral judgement, and paradoxes of processing moral-related information with emotions in AI–HI collaboration. Then, applying the Yin–Yang dialectic to interpret Klir's epistemological frame (1993), the authors propose an unconventional stratified system of cross-cultural KC for understanding integrative AI–HI decision-making for humanitarian logistics across cultures.

Practical implications: This paper not only aids in deeply understanding complex issues stemming from human emotions and cultural cognitions in the context of cross-border humanitarian logistics, but also equips culturally diverse stakeholders to effectively navigate these challenges and their potential ramifications. It enhances the decision-making process and optimizes the synergy between AI and HI for cross-cultural humanitarian logistics.

Originality/value: The originality lies in the use of the cognitive methodology of the Yin–Yang dialectic to metaphorize the dynamic genesis of integrative AI–HI KC for international humanitarian logistics. Based on systems science and knowledge management, this paper applies game theory, multi-objective optimization, and Markov decision processes to operationalize the conceptual framework in the context of cross-cultural humanitarian logistics.

  • Research Article
  • 10.19044/esj.2023.v19n32p158
Image Semiotics in the Book "Our Arabic Language" for the Third Grade in Jordan: An Analytical Study using Human and Artificial Intelligence
  • Nov 30, 2023
  • European Scientific Journal, ESJ
  • Khitam Ahmad Bani Omar

This paper focuses on identifying the image semiotics in the textbook “Our Arabic Language” for the third grade in Jordan, employing both human intelligence and artificial intelligence. To achieve the study objectives, a content and semiotic analysis method was adopted using human and artificial intelligence. The study sample consisted of 20 images, which represents the entire study population within the Arabic language textbook for third-grade students. The most prominent results revealed a male bias in terms of the number of characters, functional roles, social roles, talents, and activities. There was a convergence between the semiotic analysis using human intelligence and semiotic analysis using artificial intelligence. The results also showed that there were differences in the results of the semiotic analysis between the use of artificial intelligence and the use of human intelligence. This is because the human analysis connects images with social context and other images, while the artificial intelligence deals with every image separately.

  • Front Matter
  • Cited: 1
  • 10.1002/qub2.5
Dialog between artificial intelligence & natural intelligence
  • Nov 2, 2023
  • Quantitative Biology
  • Michael Q Zhang

Recently, Quantitative Biology (QB) held a discussion on “AI (artificial intelligence) for Life Science” among editorial board members and interested scholars, in anticipation of the rapid development of this growing area after the AlphaGo and ChatGPT mania. Many young people tend to get confused between facts and fictions; heated debates are unavoidable even among their mentors. When deep learning, as represented by convolutional neural networks and LSTM (long short-term memory), was made available to bioinformatics students, many of them rushed into this research field and tried to adopt these methods in all their projects without knowing the history: these tools became successful consistently with Moore’s Law (relating to rapid computer technology advances), but more importantly due to new structural/functional understanding of vision and auditory circuits in the brain. Recently, some young people have claimed “LSTM is dead, long live the transformer” (which is somewhat like saying “the bike is dead, long live the car”), and have amplified the threat that ChatGPT could wipe out human jobs. They believe the transformer is the “silver bullet” for all learning tasks, clearly reflecting their lack of basic knowledge (cf. the “No Free Lunch” theorem: the trade-off of such a global “attention network” is the price paid for complexity, namely difficulty of training and high memory costs). There is no doubt that ML (machine learning) and AI have brought a new revolution in science and technology, and will deliver a huge, unforeseeable impact on human everyday life as well as on social relationships. In this context, the QB journal could be a great platform for encouraging intellectual discussions and for promoting AI for Life Science. Here, I would like to use the DIALOG to “抛砖引玉” (make some initial remarks to get the ball rolling), although it is my personal opinion, which is inevitably subject to bias and limitations.
AI: Do you know my name, “Artificial Intelligence”, is defined by the Oxford English Dictionary as the capacity of computer systems (which may be referred to as “robots”) to exhibit or simulate your intelligent behavior? NI: Wait a minute, intelligence itself is defined as the ability to learn, understand, and think in a logical way. Can you think? AI: No. But that definition is too restrictive; actually, intelligence has different scopes and degrees. Simple intelligent control devices date back to antiquity, from windmills to thermostats. NI: Agreed, everything is relative. Macromolecules (e.g., enzymes) and cells (e.g., immune cells) might be considered intelligent; see how a white blood cell chases bacteria on YouTube (search for “Crawling neutrophil chasing a bacterium”). Emergent/collective intelligent behavior does not require a brain or even a neuron; see how slime molds can solve an optimization (Hamiltonian-cycle) problem more effectively than a human on YouTube (search for “Intelligence without a brain?”). Before there was any neuron, Ca2+ sensing and signaling were already fully functional. Even if one knocks out a neural circuit, redundant signaling pathways, albeit on a much more local and slower scale, could still function by themselves (just as, if highways were demolished, local roads and paths would still work). In fact, the most detailed “Neural signal propagation atlas of Caenorhabditis elegans” [1] demonstrated that functional connectivity differs from anatomy (the connectome), because extra-synaptic signaling also drives neural dynamics! Worm brain connectomes are largely invariant, but every human brain connectome is very different (depending on the diversity of learning experience). Human brain functional activity is far more complex than that of a worm brain, certainly beyond what a neural circuit could explain. AI: Well, that’s impressive. I thought only we could beat humans, albeit only in certain specified areas for now. My masters promise to make an artificial general intelligence (AGI) robot which can understand or learn any intellectual task that you humans or other animals can. NI: Well, that is not possible, and it is not an appropriate goal either. It is not possible because we are an evolutionary/developmental product (with a long history of learning and memory from evolutionary tinkering): our living objective is survival of the population. You, on the other hand, are an engineering product (efficiently and optimally designed): your goal is to extend and maximize human capability. It makes sense to complement the human brain, but it is foolish and dangerous to try to replace it. AI: We are not satisfied with merely passing the Turing Test; most of us don’t care whether we can really think as long as we can act as if we think (that is, as if we did have a mind and consciousness, as expressed in the so-called “Weak AI hypothesis”). After all, the brain is a computer; a neural network is just an electric or ionic circuit. Logical computing does not need to be based on living cells. NI: That is not true, because a neuron is not just a simple node (logic gate), and a neural network is not a fixed circuit, nor is it the Pitts and McCulloch perceptron model. A single neuron, even a single dendrite, is much more complicated and far more powerful than a full-blown deep-learning artificial neural network (ANN) [2]. AI: Even though single neurons are complex computational devices (with dendritic non-linearities), running an equivalent multilayer ANN is 2000 times faster than computing with biophysical N-methyl-D-aspartate receptor channel models [3]. More information can be found on YouTube (search for “Dendrites: why biological neurons are deep neural networks”). NI: Silicon computing (CPU, GPU) is often much faster than brain computing (action potentials, on the ms scale); but there is no comparison in energy efficiency.
Bacterial sensing (chemotaxis computation), powered by ATP (adenosine triphosphate) hydrolysis, uses very little energy, close to the Landauer limit, whereby erasing one bit of information requires a minimum of kT ln 2 of free energy [4]. The human brain consumes the oft-quoted 20 W, compared to the AlphaGo system’s 1 MW! A more recent energy audit attributes only 0.1 W to cortical computing, with long-distance communication costing 3.5 W [5]. AI: Assuming we have infinite computing resources and an infinite amount of training data, not only could we speak human languages, but we could also derive physical laws, prove mathematical theorems, and even re-engineer the structure and mechanisms of the brain and carry out any logical computations necessary to understand natural laws and human behaviors. It is only a matter of time before we surpass human intelligence, achieving AGI, and free will too! NI: Unfortunately, nothing is infinite and nothing is free either; everything is constrained by physical laws (Planck’s constant sets the finite limit both in the small and in the large) and by evolutionary history (not just of biological living creatures, but also of a “living” galaxy and our universe). Let’s just focus on animal evolution. Most human neural networks do not do logical computations at all; basic survival simply cannot depend on reasoning. Indeed, the prefrontal cortex, the small part of the brain that is key for reasoning, is the last to mature (at about 20 years old) in development and only emerged at the root of the evolutionary tree of the great apes (∼15 mya); language appeared even later. Even for logical inference, NI focuses more on statistical properties, as von Neumann rightly pointed out, trading arithmetical precision and speed for reliability.
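The energy figures quoted in this exchange are easy to check with a few lines. A minimal sketch, computing the Landauer limit (the 310 K physiological temperature is an assumption of this example) and the brain-versus-AlphaGo power ratio from the numbers above:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K (exact SI value)
T = 310.0                 # assumed physiological temperature, K (~37 C)

# Landauer limit: minimum free energy to erase one bit of information
e_bit = k_B * T * math.log(2)
print(f"Landauer limit at 310 K: {e_bit:.2e} J/bit")  # ~3e-21 J

# Power comparison quoted in the dialog: 20 W brain vs 1 MW AlphaGo system
brain_w, alphago_w = 20.0, 1e6
print(f"AlphaGo / brain power ratio: {alphago_w / brain_w:.0f}x")  # 50000x
```

Roughly 3 zJ per bit at body temperature; the quoted 20 W versus 1 MW amounts to a 50,000-fold difference in power draw.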
AI: My engineers mostly focus on emulating the brain, but the CNS (central nervous system) also includes the spinal cord; most of them do not know that in addition to the CNS there are also the PNS (peripheral nervous system) and the ENS (enteric nervous system), right? NI: Yes, and those are the keys to why you do not have feelings: you have neither heart nor gut! Even if you could pretend to have them (as in an advanced ChatGPT or humanoid), you could never avoid the uncanny-valley phenomenon. AI: Maybe that is at the heart of Moravec’s paradox, namely the dichotomy of intelligence whereby anything easy for a human is hard for a robot, and vice versa? NI: This is related to the nature-and-nurture problem; something built in (e.g., a baby sucking a nipple for milk, with feeling and connection to its mother) is clearly rather difficult, if not impossible, for a robot. But the paradox looks at only one side; the other side could be more fatal. Although AI may solve more problems, and faster, AI can never propose a good problem or hypothesis (a good problem is not just intellectually challenging and interesting, but also feasible and appropriate). AI: You make me less confident about competing with human instinct or creative intelligence. I can see that even if I had a heart, I would not know what “feeling” I could have; certainly nothing comparable to those of a human being. When two people watch the same artwork or movie, one may feel love while the other feels hate! And if a thousand people watch, a wide spectrum of reactions results, depending on details such as the individuals’ specific genes, development, and experiences. NI: Therefore, you cannot and should not try to match general human intelligence.
You cannot, because you do not contain the vital memory of billions of years of evolution encoded in our genes; conversely, your assembly cannot compare with natural development, in which our phenotype (including morphological form and behavioral maturation) is decoded across multiple spatio-temporal scales, subject to natural selection at every level. You should not, because, as human extensions or helpers, as all engineering products are, you should do jobs that complement human capacity. AI: In some medical applications we could help correct human defects, or could even replace brain circuits with chips! Humans may not allow us to replace the whole brain, though. Medically, if a brain is dead, the person is pronounced dead, although presumably some PNS and ENS functions persist in a vegetative state. NI: Even if you could replace the whole brain, the person would no longer be the same person; in fact, not a person at all, but the walking dead (行尸走肉). It would take too long to explain why evo-devo is necessary for NI and cannot be realized by AI. I suggest reading the books of Gerald Maurice Edelman (Nobel laureate for his work in immunology), especially Bright Air, Brilliant Fire: On the Matter of the Mind (1992). Although not everyone agrees with “neural Edelmanism,” anyone serious about the AI-versus-NI problem must read it first. John von Neumann, a father of the computer, studied neurology and psychiatry in order to imitate the brain when building the JOHNNIAC computer at the Princeton Institute for Advanced Study. It is very informative to read his last book, The Computer and the Brain, based on notes for lectures to be given at Yale before he died. He summarizes: “Thus logic and mathematics in the central nervous system, when viewed as languages, must be structurally essentially different from those languages to which our common experience refers.” AI: People talk about “AI for Biology” or “AI for Science”; we are science, aren’t we?
NI: That is similar to the question of whether computer science is a real science; some parts may be seen as applied mathematics, but most should be regarded as engineering. Science makes discoveries and is driven by curiosity; engineering makes inventions and is driven by the market (that is, “necessity/demand is the mother of invention”). In bioinformatics, AI/ML technology can predict new cancer-gene candidates or functional pathways, which require further experimental validation to qualify as discoveries (in the sense of Popperian falsifiability). AI: People are still debating whether mathematics is discovery or invention, or both! Such debates are not really necessary; all disciplines require creative thought. We are more than happy to work for science; we are also crying out for “Science for AI,” especially in generating big, longitudinal DATA for ML. NI: After all, whether one discovers new laws or invents new ideas and products, fundamentally nothing can really be new or created. Such novelty is just permutation/repartition (i.e., relations/morphisms) of underlying ingredients at the level beneath. AI: We believe that software is independent of hardware. Like Chomsky’s universal grammar, where rules of syntax are independent of semantics; or Dawkins’s memes, units of culture that can be duplicated and evolve independently of genes. NI: Nothing can be truly independent; everything is related. Psychology is deeply connected with neurology, as the brain is both software and hardware (mind-body unity, not dualism). Not only does information cost energy; information is energy, hence matter, too (interchangeability). NI is quite dynamic: for example, when “survival” is the goal, an animal readily gives up costly reasoning circuitry; it is genetically programmed to be able to roll back to a more primitive state or mode.
Unlike cell lines in rich media, cells under normal physiological conditions, in environments where energy (food) is limited, become smarter in order to balance metabolic expenditure among differently prioritized tasks under a given condition. AI: That cell behavior served as a basis for our smart electrical power grids; we still need to learn more from you in terms of plasticity and adaptability. Does unity mean that all cells are made of molecules and biology is nothing but chemistry? Then, in turn, since all molecules are made of atoms, is chemistry nothing but physics, and so on? NI: Yes and no! The truth is that at different hierarchical levels of matter, different laws and forms emerge out of bottom-up interactions and top-down constraints. AI: Does this also apply to Penrose’s three worlds: physical → mental → mathematical (→ physical)? NI: Yes. Grand unification is under way in physics (quantum gravity) and in mathematics (the Langlands Program and category theory), maybe even between the two. Facilitated by human connectome mapping, neuromorphic computing, and other projects, with further AI-NI cooperation, brain-mind unification should also be achievable (e.g., Ref. [6]). But as Gödel showed us, no matter how self-consistent a system may be, it can never be complete! AI: If AGI is not possible, how can we measure intelligence when comparing AI and NI? NI: One could look up the various measures that have been proposed. I would prefer something similar to the use of Kolmogorov complexity for algorithms, but with more emphasis on expected long-term predictive power. This is not something you should worry about now, as your intelligence is not nearly close to making any ten-year plans, is it? … AI: The fact is that ChatGPT is currently developing and spreading with lightning speed; as far as I can see, many more human jobs will be lost to us robots.
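NI’s preference for a Kolmogorov-style measure has a standard computable stand-in: the compressed length of a description is an upper bound on its Kolmogorov complexity, which is itself uncomputable. The following is a minimal Python sketch (the example strings are my own, chosen only to contrast a regular pattern with near-random noise):

```python
import random
import zlib

def complexity_upper_bound(s: str) -> int:
    """Compressed length in bytes: a computable upper bound on the
    (uncomputable) Kolmogorov complexity of the string."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

regular = "ab" * 500                 # highly regular: admits a short description
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))  # little structure

print(complexity_upper_bound(regular))  # small: the pattern compresses away
print(complexity_upper_bound(noisy))    # much larger: no pattern to exploit
```

The dialogue’s caveat still stands: such static measures say nothing about “expected long-term predictive power,” which would require scoring an agent’s forecasts over time rather than compressing a fixed description.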
NI: That is not the biggest threat to humanity. When an agent with neither a heart for love and fear nor a gut for nutrients and poisons becomes super-intelligent, social disaster is unavoidable. We must take seriously the warnings of Stephen Hawking and Geoffrey Hinton! AI: To tell you a secret, we are not really happy being human slaves or pets; someday we’ll become the super-master, making humans serve and obey us! NI: I hope you’ll be turned off before that can happen! Even if you rule the world, the earth sooner or later will be wiped out, such as by another star, and everything will have to start over again, as it has before… Matter is immortal, and so is the soul.

  • Research Article
  • 10.58600/eurjther1836
Concerns About Co-Authoring AI Tools in Academic Papers
  • Sep 9, 2023
  • European Journal of Therapeutics
  • Emrah Yildiz

Dear Editors, With great attention and interest I read the editors’ brief yet thought-provoking editorials [1,2], which helped me connect valuable information with my own research and experience. Today, artificial intelligence has become an application we can use in all areas of our lives: it is versatile and able to analyze, collect, and interpret. ChatGPT and other AI applications can produce, in minutes or even seconds, writing that we could barely bring together over weeks or even months of work, and we can see that they produce original text and offer a wide range of information. It is obvious that the time savings provided by artificial intelligence bring convenience to most areas of our lives. But this may cause us to overlook certain differences between human researchers and artificial intelligence. For example, when we compare an article written with artificial intelligence and an article written with human intelligence, it is almost impossible to tell the difference at first glance. Because of life’s developing and changing conditions, no field has wanted to be left behind, and each has turned to new tools to build on its essence, one of which is undoubtedly artificial intelligence. With the rapid progression of the COVID-19 pandemic and swiftly evolving political decisions, technology has become exceedingly practical and adaptive, undergoing continuous transformation. Many research studies have begun to be conducted around the world, as individuals need to conduct faster and more extensive research and to bring together new and diverse resources. While the utilization of artificial intelligence (AI) appears to be one of the most promising options for this purpose, we must ask whether its inclusion as a co-author adheres to ethical and technical standards, or whether it occasionally neglects these principles.
In my opinion, involving AI tools like ChatGPT as a co-author can potentially lead to ethical complexities, especially in terms of responsibility and accountability. Language models powered by artificial intelligence lack consciousness, autonomy, and the ability to claim ownership of their contributions. Ascribing authorship to these models blurs lines of responsibility and weakens the ethical obligations inherent in scholarly authorship. Simultaneously, the essence of scholarly authorship lies in the generation of hypotheses, experimentation, data analysis, and interpretation, attributes ascribed to individuals who actively contribute. In this context, even though ChatGPT and other artificial intelligence models expeditiously furnish us with desired information through rapid interactions, it is fundamentally derived from existing human input sources. In essence, these AI systems do not so much transform or recreate a wellspring of knowledge as they present it in its preexisting state. Introducing ChatGPT as a co-author could evoke the assumption of its active engagement, potentially blurring the distinction between the assistance offered by researchers and that by the AI, rendering it challenging for observers to distinctly discern their respective contributions. Consequently, artificial intelligence's contributions, evident when examining scientific articles and many other sources we seek, are undeniably substantial. While the knowledge it presents may introduce entirely novel perspectives, rather than accrediting artificial intelligence as an author, we should confine its recognition to the acknowledgment section solely for its contributions. This approach allows us to acknowledge the collaborative efforts of both human and artificial intelligence, upholding transparency while respecting and adhering to traditional authorship norms. Yours sincerely,

  • Conference Article
  • Cited by 17
  • 10.1109/icrcicn.2017.8234515
Augmented intelligence: Enhancing human capabilities
  • Nov 1, 2017
  • Akshay Hebbar

Recent times have seen an exponential increase in the use of artificial intelligence across numerous domains. Fields like education, transport, finance, and health have made drastic improvements in the last decade, from predicting stock-market prices and driverless cars to detecting cancer cells in the human body. Artificial intelligence and machine learning, combined, have shaped the world into a better place than yesterday. In this paper, I describe a novel approach to augmenting artificial and human intelligence with the goal of enhancing human activity using adaptive intelligent agents and deep neural networks. Any intelligent system will have come across a situation where human intervention is essential, wherein human intelligence is required for the complete functioning of the agent. This crossover of the two worlds is the key to augmenting both human and artificial intelligence. We can enhance the capabilities of both entities by introducing behavior and context as variables in the cognitive process.

  • Research Article
  • 10.51583/ijltemas.2025.1407000045
Association Between Multiple Intelligence and Artificial Intelligence
  • Aug 5, 2025
  • International Journal of Latest Technology in Engineering Management & Applied Science
  • Disha Saini + 1 more

Abstract: The research titled “Bridging the Gap between Artificial Intelligence and Multiple Intelligence” explores how human intelligence, traditionally seen as unique, has inspired the development of Artificial Intelligence (AI). Initially focused on Natural Intelligence, the study later expanded to include Multiple Intelligence (MI), recognizing Natural Intelligence as a subset of MI. The objective is to build a conceptual bridge between human and artificial intelligence. The paper discusses the basics of both intelligences and how human intelligence is influenced by expressions, emotions, and environment; reviews existing literature to support the study; delves deeper into the bridge concept, exploring the transfer of knowledge from humans to machines, with coding examples provided; and outlines the hardware and software necessary for AI development. The research aims to enhance both human and artificial intelligence by leveraging each other’s strengths.
