Artificial intelligence and corporate diversification: evidence from China

Abstract

The development of artificial intelligence (AI) has brought significant transformations to corporate management practices and profoundly influenced various business activities. Accordingly, we investigate the impact of AI on corporate diversification strategies. Using data from Chinese listed companies as the research sample, this study finds that AI significantly promotes corporate diversification. This conclusion remains robust after conducting robustness checks and addressing endogeneity issues. Mechanism analysis reveals that AI promotes corporate diversification by enhancing firms’ diversified innovation levels and reducing management costs. Heterogeneity analysis shows that the promotive effect of AI on corporate diversification is more pronounced in non-state-owned enterprises, firms facing higher market competition intensity, and firms with higher customer concentration. Furthermore, we examine whether AI improves firms’ overall resource allocation efficiency and find that AI significantly enhances resource allocation efficiency, thereby offsetting the negative impact of diversification.

Similar Papers
  • Research Article
  • Cited by 2
  • 10.30884/seh/2024.01.07
The Evolution of Artificial Intelligence: From Assistance to Super Mind of Artificial General Intelligence? Article 1. Information Technology and Artificial Intelligence: The Past, Present and Some Forecasts
  • Mar 30, 2024
  • Social Evolution & History
  • Leonid Grinin + 2 more

The article is devoted to the history of the development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements, and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction in society. The article shows the close connection between the development of AI and cognitive science, and the penetration of ICT and AI into various fields, in particular health care. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. We analyze recent achievements in the field of Artificial Intelligence, describe the basic models, in particular Large Language Models (LLMs), and forecast the development of AI and the dangers that await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence, which is increasingly approaching the capabilities of the so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, large corporations and those often referred to as globalists. The article forecasts the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles.
The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI; it also analyzes the concepts of artificial intelligence, including generative AI, and changes in the understanding of AI related to the emergence of the so-called large language models and related new types of AI programs (ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI and analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI, and to the fact that achievements in the field of AI are becoming a powerful means of controlling the population, imposing ideology and choice, influencing the results of elections, and a weapon for undermining security and geopolitical struggle.

  • Research Article
  • Cited by 1
  • 10.51702/esoguifd.1583408
Ethical and Theological Problems Related to Artificial Intelligence
  • May 15, 2025
  • Eskişehir Osmangazi Üniversitesi İlahiyat Fakültesi Dergisi
  • Necmi Karslı

Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may eventually surpass it. The rapid development of these technologies has raised various ethical debates concerning moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines in detail the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of various disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thoughts on artificial intelligence have raised deep philosophical questions about human nature, consciousness, and responsibility. The algorithms developed by Alan Turing contributed to the modern shaping of artificial intelligence and put forward the first models, such as the “Turing Test”, to assess whether machines have human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence includes systems designed to perform specific tasks that do not exhibit general intelligence outside of those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most of the widely used artificial intelligence applications today fall into the category of weak artificial intelligence.
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Autonomous weapons and artificial intelligence-managed justice systems in particular raise concerns in terms of human rights and individual freedoms. In this context, the ethical framework of artificial intelligence has deep impacts on the future of humanity and human-machine interaction, extending beyond merely technological boundaries. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness and whether such conscious systems could have a spiritual status have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around principles such as transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles that need to be considered in the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for the development of artificial intelligence in harmony with social values and for the benefit of humanity.
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.

  • Research Article
  • 10.30884/jfio/2023.03.01
Artificial Intelligence: Development and Concerns. A Look into the Future. Article One. Information Technologies and Artificial Intelligence: The Past, Present and Some Forecasts
  • Sep 30, 2023
  • Философия и общество (Philosophy and Society)
  • Leonid Grinin + 2 more

The article is devoted to the history of the development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements, and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction in society. The article shows the close connection between the development of AI and cognitive science, and the penetration of ICT and AI into various fields, in particular health care. A significant part of the article is devoted to the analysis of the concept of “artificial intelligence”, including the definition of generative AI. The article analyzes recent achievements in the field of Artificial Intelligence, describes the basic models, in particular Large Language Models (LLMs), and forecasts the development of AI and the dangers that await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence, which is increasingly approaching the capabilities of the so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations and those often referred to as globalists. The article forecasts the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles.
The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI; it also analyzes the concepts of artificial intelligence, including generative AI, and changes in the understanding of AI in connection with the emergence of the so-called large language models and related new types of AI programs (ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI and analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI, and to the fact that achievements in the field of AI are becoming a powerful means of control over the population, imposing ideology and choice, influencing the results of elections, and a weapon for undermining security and geopolitical struggle.

  • Research Article
  • Cited by 37
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, to ground understandings of generative AI while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of the generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). 
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world…

  • Preprint Article
  • 10.20944/preprints202501.2099.v1
A Roadmap to Superintelligence: Architectures, Transformations, and Challenges in Modern AI Development
  • Jan 28, 2025
  • Ruslan Idelfonso Magana Vsevolodovna

This paper examines the trajectory of artificial intelligence (AI) development, focusing on three key stages: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Recent advancements in AI architectures, particularly the evolution of transformer-based models, have significantly accelerated progress across these stages, enabling more sophisticated and scalable AI systems. This paper explores the architectural foundations of ANI, AGI, and ASI, highlighting recent modifications and their implications for future AI development. Additionally, the societal, ethical, and geopolitical implications of AI are discussed, emphasizing the need for robust safeguards and governance frameworks to ensure that AI serves as a force for human advancement rather than a source of existential risk. By integrating historical comparisons, current trends, and future projections, this paper provides a comprehensive analysis of the transformative potential of AI and its impact on humanity.

  • Research Article
  • Cited by 3
  • 10.21146/2413-9084-2022-27-2-100-107
The Development of Artificial Intelligence and the Global Crisis of Earth's Civilization (Toward an Analysis of Socio-Humanitarian Problems)
  • Jan 1, 2022
  • Philosophy of Science and Technology
  • D. I. Dubrovsky

The article considers a qualitatively new stage in the development of artificial intelligence (AI), associated with the development of artificial general intelligence (AGI). Unlike traditional AI, AGI is significantly closer in its functions to natural intelligence: it will be able to self-learn and to solve a wide range of tasks in different environments, i.e. to be integral and autonomous. Such a level of “independence” of AGI opens up fundamentally new prospects for the development of information technologies, but at the same time poses many acute socio-humanitarian problems associated with the risks and threats of losing control over the development of AI. The successful development of AGI requires new theoretical and methodological approaches based on the principles of post-nonclassical epistemology and the results of neuroscientific and phenomenological studies of consciousness. It is very important to consider these issues from the angle of the extreme aggravation of the global crisis of world civilization, due to its consumer dominance and efforts to preserve its monopolar structure on the part of the United States and its Western allies. In this regard, a broader, philosophical-anthropological approach is also required to understand the current state of our civilization and the possibilities for its transformation. It involves taking into account what is called the nature of man, as a stable complex of his mental and bodily properties. These properties have been reproduced among all peoples, in all historical epochs, under all social structures, which indicates their biological conditionality. Among them, along with altruistic properties, a number of negative properties can be distinguished (such as unlimited consumerism, aggressiveness towards one’s own kind, excessive egoistic self-will).
These characteristic properties of mass consciousness were actively exploited by adherents of monopolarity in their own interests. Overcoming the principles and practices of monopolarity, and thereby changing the global social self-organization, is a necessary condition for a truly humanistic stage of anthropotechnological evolution, capable of opening up new existential prospects for the transformation of man and mankind.

  • Research Article
  • Cited by 12
  • 10.1108/k-03-2022-0472
Co-evolutionary hybrid intelligence is a key concept for the world intellectualization
  • Oct 17, 2022
  • Kybernetes
  • Kirill Krinkin + 2 more

Purpose: This study aims to show the inconsistency of the approach to the development of artificial intelligence as an independent tool (just one more tool that humans have developed); to describe the logic and concept of intelligence development regardless of its substrate, a human or a machine; and to prove that the co-evolutionary hybridization of machine and human intelligence will make it possible to reach solutions for problems inaccessible to humanity so far (global climate monitoring and control, pandemics, etc.).
Design/methodology/approach: The global trend for artificial intelligence development was set during the Dartmouth seminar in 1956. The main goal was to define characteristics and research directions for artificial intelligence comparable to or even outperforming human intelligence. Such intelligence should be able to acquire and create new knowledge in a highly uncertain dynamic environment (the real-world environment is an example) and apply that knowledge to solving practical problems. Nowadays artificial intelligence outperforms human abilities in some areas (playing games, speech recognition, search, art generation, extracting patterns from data, etc.), but all these examples show that developers have come to a dead end. Narrow artificial intelligence has no connection to real human intelligence and often cannot be used successfully due to a lack of transparency, explainability, computational efficiency, and many other limits. A model of strong artificial intelligence development can be discussed independently of the substrate, in terms of the general properties inherent in the development of intelligence. Only then can it be clarified which part of cognitive functions can be transferred to an artificial medium. The process of development of intelligence (as mutual development (co-development) of human and artificial intelligence) should correspond to the property of increasing cognitive interoperability. The degree of cognitive interoperability is arranged in the same way as the method of measuring the strength of intelligence: it is stronger if knowledge can be transferred between different domains at a higher level of abstraction (Chollet, 2018).
Findings: The key factors behind the development of hybrid intelligence are interoperability – the ability to create a common ontology in the context of the problem being solved, and to plan and carry out joint activities – and co-evolution – ensuring the growth of aggregate intellectual ability without the loss of subjectness by either substrate (human, machine). The rate of co-evolution depends on the rate of knowledge interchange and the manufacturability of this process.
Research limitations/implications: Resistance to the idea of developing co-evolutionary hybrid intelligence can be expected from agents and developers who have bet on and invested in data-driven artificial intelligence and machine learning.
Practical implications: Revision of the approach to intellectualization through the development of hybrid intelligence methods will help bridge the gap between the developers of specific solutions and those who apply them. Co-evolution of machine intelligence and human intelligence will ensure seamless integration of smart new solutions into the global division of labor and social institutions.
Originality/value: The novelty of the research lies in a new look at the principles of the development of machine and human intelligence in a co-evolutionary style. Also new is the statement that the development of intelligence should take place within the framework of integration of the following four domains: global challenges and tasks, concepts (general hybrid intelligence), technologies, and products (specific applications that satisfy the needs of the market).

  • Research Article
  • Cited by 1
  • 10.71364/sfcj3f93
Moral Responsibility in the Development of Artificial Intelligence according to Ethical Theology
  • Feb 27, 2025
  • Journal of the American Institute
  • Kesumawati Kesumawati + 2 more

The development of artificial intelligence (AI) has brought significant changes to various sectors of life, including industry, education, and health. However, advances in AI also pose moral and ethical challenges, especially related to transparency, fairness, and accountability in their use. In the context of ethical theology, moral responsibility in the development of AI is an important aspect that needs to be considered to ensure that this technology is developed and applied responsibly, in accordance with applicable human values and moral principles. This research aims to examine how the principles of ethical theology can provide a normative foundation for the development of more ethical and responsible AI. The method used in this study is a literature study analyzing various academic sources, including scientific journals, books, and policy documents that discuss the relationship between AI, morality, and ethical theology. The collected data were then analyzed using qualitative content analysis to identify the main findings of this study. The results show that the development of ethical AI requires the integration of moral principles such as justice, love, accountability, and respect for human dignity. Additionally, human regulation and oversight remain necessary to ensure that AI is not used in a way that harms certain individuals or groups. Therefore, the ethical theology approach can be one of the solutions in formulating a more equitable and responsible AI policy.

  • Book Chapter
  • 10.1007/978-981-13-9390-7_6
Retrospect and Prospect of Artificial Intelligence Research in China
  • Nov 20, 2019
  • Jie Tang + 2 more

With the rapid development and application of artificial intelligence (AI), computer technology has entered the era of a new Information Technology (IT) called Intelligent Technology. AI can accelerate the informatization of science and technology. In the past two years, AI research has been promoted to the level of a national development strategy in China. This chapter explores the origin and development of AI in general and the development of AI in China. AMiner, a big data analysis and service platform for science and technology, was independently developed in China and is a successful case in the informatization of science and technology in China. Based on the open AI dataset in AMiner, we give a classification of AI research in China. We overview the state of AI research in China based on an analysis of experts, publications, and patents. AI applications such as speech recognition, face recognition, and automatic driving are introduced in the chapter. We also discuss the opportunities and challenges of AI in China. In general, this chapter fills a gap in the authoritative analysis of the AI research situation in China.

  • Research Article
  • Cited by 1
  • 10.32983/2222-4459-2024-5-118-124
GenAI – An Imperative for Improving the Institutional Framework of Risk Management
  • Jan 1, 2024
  • Business Inform
  • Ganna M Kolomiyets + 2 more

The article analyzes the rapid progress of the artificial intelligence sector as one of the most promising areas of information and communication technologies. The use of generative artificial intelligence (GenAI) systems is growing across business areas, accompanied by a significant increase in investment. The authors focus on the need to implement GenAI in Ukrainian business, while emphasizing the negative consequences that can arise from the development of artificial intelligence (AI), which may be either accidental or malicious. The importance of risk management for the effective application of GenAI in business is emphasized. An analysis of scientific publications in the field of artificial intelligence shows growing interest in understanding and analyzing the risks of developing and using AI. The need for continuous monitoring and development of institutional frameworks for effective AI risk management is underlined, including integrating the efforts of all stakeholders and differentiating those efforts across the stages of AI development and use. The potential negative consequences of AI range from accidental to deliberate. Information generated by AI systems may be inaccurate, or biased along gender, racial, and other stereotypes, and can be used to facilitate unethical or criminal activities. Some of the inherent risks of AI have already been explored, while others remain unknown. This situation necessitates systematic monitoring of the possible implications of AI development and adoption, as well as the development of appropriate institutional frameworks to assess progress in this area.

  • Research Article
  • Cited by 29
  • 10.1108/lht-01-2021-0018
Evolutions and trends of artificial intelligence (AI): research, output, influence and competition
  • Jul 22, 2021
  • Library Hi Tech
  • Zhou Shao + 3 more

Purpose – This paper sheds light on the nature of artificial intelligence (AI) development, serving as a starting point for helping to advance it. Design/methodology/approach – This work reveals the evolutions and trends of AI along four dimensions – research, output, influence and competition – by leveraging an academic knowledge graph covering 130,750 AI scholars and 43,746 scholarly articles. Findings – The authors find that the "research convergence" phenomenon has become more evident in current AI research, as scholars in different regions share highly similar research interests. They note that Pareto's principle applies to AI scholars' outputs, which have been increasing at an explosive rate over the past two decades. They discover that top works dominate AI academia, attracting considerable attention. Finally, the authors delve into AI competition, which accelerates technology development, talent flow, and collaboration. Originality/value – The work aims to illuminate the nature of AI development and serve as a starting point for advancing it. It supports a more comprehensive and profound understanding of these evolutions and trends, bridging the gap between literature research and AI development and informing how AI development and its strategy formulation are promoted.

  • Research Article
  • 10.26689/jera.v3i5.994
A Preliminary Study of the Influence of Artificial Intelligence on Globalization
  • Dec 20, 2019
  • Journal of Electronic Research and Application
  • Chang Wang

The emergence and development of artificial intelligence (AI) still belong, in essence, to the category of scientific and technological development. However, unlike previous science and technology, its continuous development will, on the one hand, bring about the renewal and iteration of production tools and promote the development of productive forces; on the other hand, its application will affect all aspects of social life, including military, political, and economic affairs. As AI develops, countries with technical advantages in AI will build strong technical barriers, further widening the gap between countries and deepening their differentiation. The development and wide application of artificial intelligence have brought new changes to globalization, chiefly through the unique influence of AI technology, which polarizes the pattern of international power, further exacerbates the deterioration of the old order, and poses new challenges to globalization. The development of artificial intelligence will have a far-reaching impact on globalization.

  • Research Article
  • 10.34680/beneficium.2024.3(52).60-67
The Evolution of the Financial Sector of the Economy Under the Pressure of Artificial Intelligence Technologies
  • Jan 1, 2024
  • Beneficium
  • O.M Makhalina + 1 more

Artificial intelligence, developing actively, is gradually turning into one of the most important technologies of the XXI century and is becoming the main driver of the country's economic development. Currently, the intensive development of artificial intelligence technologies and neurotechnologies has a powerful transformational impact on various spheres of human activity. The article provides an overview of the current trends in the development and improvement of artificial intelligence and examines the process of its active penetration into the financial sector, considering both its various advantages and the potential disadvantages, risks, and challenges associated with its implementation. The spread of artificial intelligence in the financial sector is avalanche-like in nature and is characterized by the variety, complexity, and sensitivity of the problems that arise, requiring an integrated and systematic approach that considers the level of development of the proposed artificial intelligence solutions. The authors examined the state of development of artificial intelligence and the applied technologies that can transform financial relations, and considered and evaluated the potential conditions for their development. The article also examines the conditions for the introduction and development of artificial intelligence in the financial sector, which depend on increasing the availability and quality of data, software, computing power, and infrastructure; the degree of development of scientific thought and research; and the level of professional competencies. Artificial intelligence technologies are the basis for the implementation and development of applied financial technologies; therefore, the article examines the prospects for the development of AI, identifies the types of trends, and suggests approaches, tools, and technologies for their implementation.

  • Research Article
  • Cited by 3
  • 10.1515/omgc-2024-0041
China’s policies and investments in metaverse and AI development: implications for academic research
  • Jan 30, 2025
  • Online Media and Global Communication
  • Vincenzo De Masi + 3 more

Purpose This study analyzes China’s strategic initiatives in metaverse and artificial intelligence (AI) development, examining their impact on academic research, industry innovation, and policy formulation. It aims to understand how government policies and investments have shaped research agendas and to identify challenges and opportunities in these fields. Design/methodology/approach The research employs a comprehensive analysis of government documents, funding schemes, and research output. It examines key policies, investment programs, and academic publications to track trends in metaverse and AI development in China. The study utilizes bibliometric analysis to assess publication trends, citation patterns, and international collaboration networks. Findings China’s proactive approach, characterized by strong government support and significant private sector investment, has led to a substantial increase in research output and quality in metaverse and AI fields. Chinese institutions have become major contributors to global publications, with growing citation rates and presence at international conferences. The research identifies emerging challenges in privacy, ethical AI development, and digital divide concerns. Practical implications The findings provide insights for policymakers, researchers, and industry stakeholders on the development trajectory of metaverse and AI technologies in China. They highlight the need for balanced approaches to innovation, regulation, and ethical considerations in these rapidly evolving fields. Social implications The study underscores the potential of metaverse and AI technologies to transform various sectors of society, from education and healthcare to entertainment and social interactions. It emphasizes the importance of addressing digital equity and ethical AI deployment to ensure broad societal benefits. 
Originality/value This research offers a comprehensive overview of China’s approach to metaverse and AI development, providing a unique perspective on the interplay between government initiatives, academic research, and industry innovation. It contributes to the broader discussion on the global development of these transformative technologies and their implications for future technological landscapes.

  • Preprint Article
  • 10.2196/preprints.63895
The prerequisites for artificial intelligence in Danish general practice: A qualitative vignette study among general practitioners (Preprint)
  • Jul 4, 2024
  • Natasha Lee Jørgensen + 5 more

BACKGROUND Artificial intelligence has been deemed revolutionary in medicine, but very few artificial intelligence solutions have been adopted in Danish general practice. General practice in Denmark has an excellent system of digitization on which to develop and utilize artificial intelligence. However, general practitioners are rarely involved in the development of artificial intelligence. The perspectives of general practitioners as end users are essential to facilitate the development and implementation of artificial intelligence in general practice. OBJECTIVE This study aimed to characterize the prerequisites that must be met to enable the development and implementation of artificial intelligence in Danish general practice. METHODS This study applied semi-structured interviews and vignettes to gather general practitioners' perspectives on the potential for developing and implementing artificial intelligence. Twelve general practitioners were interviewed, yielding a rich dataset. The interviews were transcribed, and thematic analysis was conducted to identify the dominant themes in the data. RESULTS The analysis identified four main themes as prerequisites that general practitioners found important when developing and implementing AI in general practice: 'AI must begin with the low-hanging fruit', 'AI must be meaningful in the GP's work', 'The GP-patient relationship must be maintained despite AI', and 'AI must be a free, active, and integrated option in the EHR'. CONCLUSIONS These four prerequisite themes can guide the first steps of future development and implementation of artificial intelligence in Danish general practice. The participating general practitioners were positive towards developing and implementing artificial intelligence in their clinics, and it would be valuable to consider these prerequisites when evaluating new artificial intelligence tools for general practice.
