The Hegel Test
To test whether machines in fact possess human-like, or even greater-than-human, intelligence, we need to compare them with a genuine high point of human intelligence. In this article, Hegel’s philosophy is proposed as such a criterion, and a test is presented grounded in his dialectical philosophy of negativity, recognition, and labor. Earlier tests have arguably set their criteria too low and can therefore detect and examine only narrow regions of cognitive ability, which makes them of limited use for judging whether machines possess strong AI. In addition, many theories of machine intelligence lack a systematic concept of thinking and intelligence, leaving the testing of machines on uncertain ground. With the Hegel test, this article formulates an adequate standard for machines to aim at in order to become human-like cognitively and existentially, and subsequently to move further towards the greater goal of superintelligence.
- Research Article
- 10.20339/am.02-24.070
- Feb 1, 2024
- Alma mater. Vestnik Vysshey Shkoly
The rapid development of digital intelligence creates the need to establish rules for its functioning, since digital intelligence itself passes through certain stages of development, a so-called maturation. Currently, the use and functioning of artificial intelligence (AI) is largely unregulated. Ethical difficulties arise: does artificial intelligence have the right to write speeches and reports for political leaders? Will ethical issues arise when artificial intelligence and human beings interact? It should be borne in mind that not every AI thinks. In the course of the creation and evolution of AI, the concepts of weak and strong artificial intelligence were formed (John Searle). The Turing test was proposed to distinguish strong from weak AI, but many weak AI systems have successfully passed it. AI is certainly not a person, but it is an entity that can think, develop, and form the rudiments of consciousness and of perception of itself and the surrounding reality. If we take it as given that AI is a personality, then, according to the theory of personality development, that personality should change, above all in its qualitative characteristics. The development of a personality implies a process of change in the systemic qualities of an individual as a result of interaction with the environment. In the course of this development, consciousness and self-awareness are formed. At this stage, with the exception of physiology and anatomy, the development of a human being as a personality and of AI as a personality are quite similar. There is, however, a theory of personality known as social robot theory. It suggests that robots, including those with artificial intelligence, may have personality characteristics such as the ability to connect emotionally with people, to perceive the environment, and to communicate socially.
Holistic personality traits emerge from the joint functioning of both levels, from all existing constructs interacting with one another. On the basis of this interaction, two types of personality can be distinguished: a cognitively complex personality (one with a large number of constructs and complex connections among them) and a cognitively simple personality (one with a small, simple set of constructs). If we consider artificial intelligence in terms of the cognitive theory of personality, there is undoubtedly a correspondence of characteristics that bear different names but perform the same functions. Cognitive theory, like the theory of artificial intelligence development, also distinguishes strong and weak personalities. However, it is generally accepted that artificial intelligence still does not have a personality: although it can imitate personality characteristics and communicate with people, this remains only an emulation. Artificial intelligence cannot feel, suffer, behave spontaneously, or demonstrate other personality traits that are important characteristics of human beings.
- Research Article
- 10.60087/jaigs.v6i1.212
- Aug 30, 2024
- Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023
This paper explores the concept of Artificial General Intelligence (AGI), delving into its foundational framework, recent advancements, and future implications. AGI refers to the development of machines with the ability to understand, learn, and apply intelligence across a wide range of tasks, mimicking human cognitive abilities. The paper outlines the theoretical underpinnings of AGI, examining the key challenges and methodologies currently shaping its evolution. It also highlights significant milestones achieved in the field, reflecting on the progress made towards achieving true AGI. Finally, the paper discusses potential future directions, considering the ethical, technical, and societal implications of AGI, as well as the impact it may have on various industries and human life.
- Research Article
- 10.46743/2160-3715/2024.6637
- Mar 10, 2024
- The Qualitative Report
Qualitative researchers can benefit from using generative artificial intelligence (GenAI) in their studies, such as different versions of ChatGPT (GPT-3.5 or GPT-4), Google Bard (now renamed Gemini), and Bing Chat (now renamed Copilot). The scientific community has used artificial intelligence (AI) tools in various ways. However, using GenAI has raised concerns about potential unreliability, bias, and unethical outcomes in GenAI-generated research results. In light of these concerns, the purpose of this commentary is to review the current use of GenAI in qualitative research, including its strengths, limitations, and ethical dilemmas, from a critical-appraisal perspective grounded in Nepal, South Asia. I explore the controversy surrounding the proper acknowledgment of GenAI or AI use in qualitative studies and how GenAI can support or challenge qualitative studies. First, I discuss what qualitative researchers need to know about GenAI in their research. Second, I examine how GenAI can serve in qualitative research as a co-author, a conversational platform, and a research assistant, both enhancing and hindering qualitative studies. Third, I address the ethical issues of using GenAI in qualitative studies. Fourth, I share my perspectives on the future of GenAI in qualitative research. I acknowledge and record the use of GenAI and/or AI alongside my own cognitive and evaluative abilities in constructing this critical appraisal. I offer ethical guidance on when and how to appropriately acknowledge the use of GenAI in qualitative studies. Finally, I offer some remarks on the implications of using GenAI in qualitative studies.
- Research Article
- 10.30857/2415-3206.2024.1.7
- Dec 2, 2024
- Management
THE PURPOSE OF THE ARTICLE is to study the theoretical concept of artificial intelligence and its impact on the modernisation of business processes and the strategic development of enterprises. RESEARCH METHODS. The article uses the following methods: expert assessments; algorithmic analysis; experimental research; statistical analysis; monitoring and evaluation of results; analysis and synthesis; graphical method, etc. PRESENTING MAIN MATERIAL. Artificial intelligence (AI) is a multidisciplinary scientific concept with enormous transformative potential for the modernisation of business processes and the strategic development of enterprises. AI is capable of radically modernising business processes and their strategic development by automating decision-making systems and predicting innovations in production based on data analytics. As AI touches upon issues such as ethics, privacy, and cybersecurity, its misuse can have serious negative consequences for users. The main types of AI fall into two broad areas: Weak AI – systems capable of performing specific tasks, but without understanding the broader context or a general ability to adapt; such systems can outperform humans in certain tasks but do not have true ‘consciousness’; Strong AI – systems that can think, understand and learn at the level of human intelligence (Strong AI remains a hypothetical area of research, as no fully implemented Strong AI exists in modern science). Currently, there are several main approaches to AI development, each with its own peculiarities. The main methods are machine learning, expert systems, neural networks, and evolutionary algorithms. AI is becoming a relevant tool for modernising business processes, optimising resources, and developing enterprises strategically. Thus, AI opens up new horizons for business, allowing companies to optimise their business processes, minimise costs, and increase efficiency.
Integrating AI into the development strategy of enterprises requires a deeper understanding of its potential and limitations, since the introduction of AI changes not only individual business processes but also the overall approach to the management and development of enterprises. One of the key areas of AI use in the strategic development of enterprises is forecasting and strategic planning, where AI can help enterprises predict economic trends, analyse the competitive environment, and develop strategies that meet future customer demands. CONCLUSIONS. It has been established that AI has enormous potential for business transformation, contributing to the modernisation of business processes and the strategic development of enterprises. However, to achieve effective results, it is important to be aware of both the advantages and disadvantages associated with its integration. Successful AI integration requires not only technical training, but also the adaptation of business models, a strategic approach, attention to all ethical aspects, investment in innovation, and continuous training of employees. The introduction of AI will allow businesses to optimise their operations, make more informed decisions, and adapt to the changing environment. Understanding the importance of AI allows businesses not only to adapt to new conditions but also to gain a competitive advantage, as AI is becoming one of the key factors in business development, and its role will only grow in the future. KEYWORDS: artificial intelligence; modernisation of business processes; strategic development of enterprises; business transformation; integration; innovation; enterprises; adaptation; implementation.
- Research Article
- 10.1111/bjet.13544
- Dec 10, 2024
- British Journal of Educational Technology
With the continuous development of technological and educational innovation, learners nowadays can obtain a variety of supports from agents such as teachers, peers, education technologies, and recently, generative artificial intelligence such as ChatGPT. In particular, there has been a surge of academic interest in human‐AI collaboration and hybrid intelligence in learning. The concept of hybrid intelligence is still at a nascent stage, and how learners can benefit from a symbiotic relationship with various agents such as AI, human experts and intelligent learning systems is still unknown. The emerging concept of hybrid intelligence also lacks deep insights and understanding of the mechanisms and consequences of hybrid human‐AI learning based on strong empirical research. In order to address this gap, we conducted a randomised experimental study and compared learners' motivations, self‐regulated learning processes and learning performances on a writing task among different groups who had support from different agents, that is, ChatGPT (also referred to as the AI group), chat with a human expert, writing analytics tools, and no extra tool. A total of 117 university students were recruited, and their multi‐channel learning, performance and motivation data were collected and analysed. The results revealed that: (1) learners who received different learning support showed no difference in post‐task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self‐regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self‐regulated learning processes, ultimately leading to differentiated performance. 
What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”. In conclusion, understanding and leveraging the respective strengths and weaknesses of different agents in learning is critical in the field of future hybrid intelligence. Practitioner notes What is already known about this topic Hybrid intelligence, combining human and machine intelligence, aims to augment human capabilities rather than replace them, creating opportunities for more effective lifelong learning and collaboration. Generative AI, such as ChatGPT, has shown potential in enhancing learning by providing immediate feedback, overcoming language barriers and facilitating personalised educational experiences. The effectiveness of AI in educational contexts varies, with some studies highlighting its benefits in improving academic performance and motivation, while others note limitations in its ability to replace human teachers entirely. What this paper adds We conducted a randomised experimental study in the lab setting and compared learners' motivations, self‐regulated learning processes and learning performances among different agent groups (AI, human expert and checklist tools). We found that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”, which can potentially hinder their ability to self‐regulate and engage deeply in learning. We also found that ChatGPT can significantly improve short‐term task performance, but it may not boost intrinsic motivation and knowledge gain and transfer. Implications for practice and/or policy When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently.
When using AI in teaching, teachers should think about which tasks are suitable for learners to complete with the assistance of AI, pay attention to stimulating learners' intrinsic motivations, and develop scaffolding to assist learners in active learning. Researchers should design multi‐task and cross‐context studies in the future to deepen our understanding of how learners could ethically and effectively learn, regulate, collaborate and evolve with AI.
- Research Article
- 10.1016/j.metrad.2023.100005
- Jun 1, 2023
- Meta-Radiology
Artificial General Intelligence (AGI) has been a long-standing goal of humanity, with the aim of creating machines capable of performing any intellectual task that humans can do. To achieve this, AGI researchers draw inspiration from the human brain and seek to replicate its principles in intelligent machines. Brain-inspired artificial intelligence is a field that has emerged from this endeavor, combining insights from neuroscience, psychology, and computer science to develop more efficient and powerful AI systems. In this article, we provide a comprehensive overview of brain-inspired AI from the perspective of AGI. We begin with the current progress in brain-inspired AI and its extensive connection with AGI. We then cover the important characteristics for both human intelligence and AGI (e.g., scaling, multimodality, and reasoning). We discuss important technologies toward achieving AGI in current AI systems, such as in-context learning and prompt tuning. We also investigate the evolution of AGI systems from both algorithmic and infrastructural perspectives. Finally, we explore the limitations and future of AGI.
- Research Article
- 10.69968/ijisem.2025v4i2336-350
- Jun 19, 2025
- International Journal of Innovations in Science Engineering And Management
Artificial General Intelligence (AGI), defined as AI systems possessing human-level cognitive abilities across a broad range of tasks, is poised to transform the global workforce, raising hopes and concerns across sectors. This paper presents a systematic review of over 40 contemporary sources examining AGI’s predicted and potential impacts on workforce dynamics and the global job market. We analyze key themes including job displacement risks, emerging employment paradigms, and policy considerations in preparation for AGI integration. Drawing upon recent literature, we explore various facets, including job displacement, the emergence of new roles, economic implications such as wage dynamics, and the critical need for workforce adaptation through reskilling and upskilling initiatives. Furthermore, we delve into the societal and ethical considerations surrounding AGI’s development and deployment, including concerns about preparedness, timelines for its arrival, and the imperative for responsible governance. By synthesizing diverse perspectives, this review aims to offer a holistic understanding of how AGI could reshape employment landscapes, urging proactive measures from policymakers, educators, and individuals to navigate this evolving future. The synthesis reveals divergent expert perspectives on both AGI timelines and socioeconomic consequences, highlighting critical gaps in workforce preparedness.
- Research Article
- 10.1186/s13063-025-08950-3
- Jul 11, 2025
- Trials
Background: The advancement of generative artificial intelligence (AI) has shown great potential to enhance productivity in many cognitive tasks. However, concerns are raised that the use of generative AI may erode human cognition due to over-reliance. Conversely, others argue that generative AI holds the promise to augment human cognition by automating menial tasks and offering insights that extend one’s cognitive abilities. To better understand the role of generative AI in human cognition, we study how college students use a generative AI tool to support their analytical writing in an educational context. We will examine the effect of using generative AI on cognitive effort, a major aspect of human cognition that reflects the extent of mental resources an individual allocates during the cognitive process. We will also examine the effect on writing performance achieved through the human-generative AI collaboration.
Methods: This study is a randomized controlled lab experiment that compares the effects of using generative AI (intervention group) versus not using it (control group) on cognitive effort and writing performance in an analytical writing task designed as a hypothetical writing class assignment for college students. During the experiment, eye-tracking technology will monitor eye movements and pupil dilation. Functional near-infrared spectroscopy (fNIRS) will collect brain hemodynamic responses. A survey will measure individuals’ perceptions of the writing task and their attitudes on generative AI. We will recruit 160 participants (aged 18–35 years) from a German university where the research will be conducted.
Discussion: This trial aims to establish the causal effects of generative AI on cognitive effort and task performance through a randomized controlled experiment. The findings aim to offer insights for policymakers in regulating generative AI and inform the responsible design and use of generative AI tools.
Trial registration: ClinicalTrials.gov NCT06511102. Registered on July 15, 2024. https://clinicaltrials.gov/study/NCT06511102
- Research Article
- 10.5204/lthj.4053
- Nov 18, 2025
- Law, Technology and Humans
The integration of generative artificial intelligence (GenAI) into legal education presents a fundamental paradox: while GenAI efficiently parses legal databases and accelerates research, it struggles to model the normative reasoning and ethical contexts foundational to jurisprudential thought. This article employs a dialectical approach to resolve this tension through a ‘Socratic-GenAI’ framework that reconceptualises GenAI as a whetstone sharpening students’ analytical capacities rather than replacing their critical thinking. Through empirical evidence, including students completing tasks 4.7 times faster yet demonstrating 31 per cent lower performance on cross-doctrinal synthesis, this research shows how GenAI’s limitations become pedagogical resources when deliberately leveraged. The framework operationalises integration through structured contention juxtaposing GenAI and human reasoning, critical interrogation protocols and epistemological transparency. Rejecting binary narratives of adoption or resistance, the article offers a roadmap for interconnectedness between human and machine intelligence, providing a template for evaluating emerging technologies against core jurisprudential values while promoting innovation and sustainability in legal training.
- Research Article
- 10.17853/1994-5639-2025-8-9-34
- Oct 4, 2025
- The Education and science journal
Introduction. The educational activities of students are currently undergoing significant changes due to the active integration of generative artificial intelligence into the learning process. The academic community is concerned that these technologies may not only diminish students’ cognitive abilities but also undermine their role as active participants in educational activities. Consequently, it is important to develop a model of interaction between students and generative artificial intelligence that supports the preservation and enhancement of their agency. This approach will enable the management of the currently spontaneous interactions between students and neural networks, making the process purposeful, controlled, and focused on the personal development of students. Aim. The present study aimed to conduct a systematic analysis of scientific perspectives on student subjectivity and its interaction with generative artificial intelligence. Methodology and research methods. The aim was achieved through the following methods: analytical literature review and systems analysis. Results. It was established that the theoretical basis for developing a model of developmental interaction between students and generative artificial intelligence comprised the fundamental characteristics of interaction as an interdisciplinary category, alongside the qualities of the subject related to autonomy, awareness, and the ability to self-regulate and self-organise activities. Scientific novelty. The interaction of students with generative artificial intelligence is regarded not only as a means to enhance educational outcomes but, above all, as an opportunity to develop the student’s personal qualities. Practical significance. The materials presented in this article may be utilised by practitioners in the fields of pedagogy and educational psychology to develop programmes for the psychological and pedagogical support of students. 
Furthermore, a theoretical review and systematic analysis of the scientific literature on the concepts of interaction and subjectivity can be beneficial for conducting psychological research and teaching psychology courses.
- Research Article
- 10.1080/23738871.2025.2597194
- May 4, 2025
- Journal of Cyber Policy
The claim that Artificial General Intelligence (AGI) poses a risk of human extinction is largely responsible for the urgency surrounding AI regulation and governance. Underlying these assessments is the idea that AI development may make a computing machine an autonomous, all-powerful actor, and thus a potential threat to humanity. Drawing on perspectives from computer science, economics and philosophy, this paper unpacks the assumptions, evidence and logic underlying the AGI construct. It concludes that AGI is an unscientific myth. Three fallacies underpin the AGI construct: (a) the idea that machine intelligence can achieve a limitless ‘generality’; (b) anthropomorphism, the unwarranted attribution of goals, desires and self-preservation motives to human-built machines; and (c) omnipotence, the assumption that superior calculating intelligence will provide AGI with unlimited physical power. The paper goes on to explain why dispelling the AGI myth is important for public policy. The myth, which still exerts heavy influence on attitudes toward digital governance, diverts attention from the real policy issues posed by the human use of AI applications, and promotes sweeping and potentially authoritarian policy interventions over all forms of information and communication technology.
- Research Article
- 10.21146/2949-3102-2024-2-2-5-17
- Sep 1, 2024
- Otechestvennaya Filosofiya
The article is devoted to the analysis of the concept of artificial general intelligence (AGI) and its interpretation proposed by the Russian philosopher David Dubrovsky in his recent research papers. The first part of the article briefly outlines current approaches to defining the concept of “artificial general intelligence”, including its interpretation as an artificial intelligent system capable of achieving common goals in a variety of environments. Referring to the texts of the most influential foreign researchers and developers, the author demonstrates the parallels between their proposed approaches to understanding general artificial intelligence and the interpretations proposed by David Dubrovsky and his co-authors. In particular, the commonality between the interpretations of the concept of the “world model” (Yann LeCun) and the concept of “techno-umwelt” is shown, as well as parallels between the hypothesis of “universal embodied AI” (Ben Goertzel) and the Russian philosopher’s arguments about the possible realization of AI through its involvement in various kinds of interactions with various worlds, virtual and physical. The second part of the article outlines the potential of the information approach developed by David Dubrovsky to solve the mind-body problem as a basis for explaining the phenomenon of general artificial intelligence. It is shown that, despite the need to refine the philosopher’s concept of information causality, his theory can contribute to a better understanding of the connection between possible AGI competencies and the phenomena of subjective reality. In conclusion, the key problems that currently make it difficult to answer the question of how the qualities of general artificial intelligence depend on the presence of phenomenal consciousness are outlined.
The emphasis is placed on the need to continue interdisciplinary cooperation among representatives of the cognitive sciences, developers, and philosophers, whose interaction is meant to help resolve the characteristic difficulties associated with both the problem of conceptualizing “artificial general intelligence” and the problem of identifying consciousness in artificial intelligent systems.
- Research Article
- 10.21209/2227-9245-2020-26-8-69-76
- Jan 1, 2020
- Transbaikal State University Journal
The state policy of artificial intelligence development in Russia is based on the national strategy approved in 2019 and valid until 2030. To understand the specifics of Russian policy, the national strategy was chosen as the object of research, and its declared and latent strategic goals as the subject. The study aims to assess the degree of correspondence between the strategic goals of state policy and modern concepts of artificial intelligence development. Content analysis was used for the automatic analysis of the texts of the national strategy, similar foreign documents, and the global array of publications. A large set of original scientific articles on artificial intelligence was identified across the eight largest bibliographic databases. Content analysis of this array made it possible to identify six approaches (algorithmic, test, cognitive, landscape, explanatory and heuristic) to constructing a concept of artificial intelligence development. The last of these is the most encompassing, allowing the other approaches to be generalized. Further analysis was carried out on the basis of the heuristic approach, within which the concepts of narrow, general and super intelligence are distinguished. The text of the national strategy was analyzed for compliance with the three concepts. It was found that the goals announced in the national strategy refer to the concept of artificial narrow intelligence. Analysis of the frequency of occurrence of terms in the strategy revealed latent goals (access to big data and software) belonging to the same concept. Examination of the context of the few mentions of artificial general intelligence in the strategy only confirmed its general focus on the development of artificial narrow intelligence. The leading countries in the analyzed area are characterized by a strategic focus on the development of technologies for artificial general intelligence and scientific research on artificial superintelligence.
The approximate time lag of the Russian strategy behind the creation of artificial general intelligence has been determined. To overcome this lag and enable Russia to occupy a leading position in the world, it is proposed to develop a new national strategy for the creation of artificial superintelligence technologies for the period up to 2050.
- Research Article
- 10.3390/educsci14020172
- Feb 7, 2024
- Education Sciences
Many educators and professionals in different industries may need to become more familiar with the basic concepts of artificial intelligence (AI) and generative artificial intelligence (Gen-AI). Therefore, this paper aims to introduce some of the basic concepts of AI and Gen-AI. The approach of this explanatory paper is first to introduce some of the underlying concepts, such as artificial intelligence, machine learning, deep learning, artificial neural networks, and large language models (LLMs), that would allow the reader to better understand generative AI. The paper also discusses some of the applications and implications of generative AI on businesses and education, followed by the current challenges associated with generative AI.
- Research Article
- 10.21146/0042-8744-2021-10-175-186
- Jan 1, 2021
- Voprosy Filosofii
The article considers the concept of artificial intelligence (AI) using the categories and basic principles of the theory of consciousness developed in the Living Ethics (LE). The latter is a modern form of the ancient tradition of exploring consciousness in Indian philosophy and spiritual practice. The categorial apparatus of Indian philosophy contains a rich variety of distinctions that may also be successfully employed in modern cognitive research. The article shows that more precise definitions of the basic concepts allow a strict delimitation between “strong” and “weak” AI, as well as between what is possible and what is completely impossible for AI. Strong AI, in the sense of possessing “subjective presentations”, appears to be impossible. But a deeper understanding of the nature of consciousness in LE allows the limits of what is considered possible for weak AI to be extended. First, LE asserts a mechanical mode of operation underlying most intellectual operations. Hence, even “weak AI” may fulfill many functions that were previously attributed only to “strong AI”. Second, LE defines consciousness and intelligence as inherent inner potential powers of material systems, manifesting also in their ability and tendency towards self-organization. Therefore, some features of “artificial” intelligence may be reconsidered as manifestations of the intrinsic “intelligence” of matter, which also implies wider possibilities for AI systems. Parallels emerge with major Western philosophers such as G. Bruno, Leibniz, H. Bergson and E. Husserl, as well as with more recent approaches such as N. Luhmann’s systems theory and B. Latour’s actor-network theory.