Ethical and Theological Problems Related to Artificial Intelligence
Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may, over time, surpass it. The rapid development of these technologies has raised various ethical debates concerning moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions in detail, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of various disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thought about artificial intelligence has raised deep philosophical questions concerning human nature, consciousness, and responsibility. Alan Turing's work contributed to the modern shaping of artificial intelligence and put forward the first models, such as the “Turing Test”, for assessing whether machines exhibit human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence comprises systems designed to perform specific tasks that do not exhibit general intelligence outside those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most widely used artificial intelligence applications today fall into the category of weak artificial intelligence.
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Autonomous weapons and AI-managed justice systems in particular raise concerns about human rights and individual freedoms. In this context, the ethical framework of artificial intelligence extends well beyond technological boundaries, with deep impacts on the future of humanity and on human-machine interaction. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness, and whether such conscious systems could have a spiritual status, have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles that must guide the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for the development of artificial intelligence in harmony with social values and for the benefit of humanity.
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of the generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
- 10.30884/seh/2024.01.07
- Mar 30, 2024
- Social Evolution & History
The article is devoted to the history of the development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements, and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction in society. The close connection between the development of AI and cognitive science, the penetration of ICT and AI into various fields, in particular the field of health care, is shown. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. We analyze recent achievements in the field of Artificial Intelligence, describe the basic models, in particular Large Language Models (LLMs), and forecast the development of AI and the dangers that await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence, which is increasingly approaching the capabilities of the so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, large corporations and those often referred to as globalists. The article forecasts the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles.
The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI; it also analyzes the concepts of artificial intelligence, including generative AI, and changes in the understanding of AI related to the emergence of the so-called large language models and related new types of AI programs (ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI, analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI, the fact that achievements in the field of AI are becoming a powerful means of controlling the population, imposing ideology and choice, influencing the results of elections, and a weapon for undermining security and geopolitical struggle.
- Conference Article
- 10.1109/taai48200.2019.8959925
- Nov 1, 2019
With the rapid development and application of artificial intelligence (AI) in various fields, the threats and ethical issues it poses to human beings are receiving more attention. The study of artificial intelligence ethics is therefore increasingly important. This study focuses on the ethical issues in the development of artificial intelligence and the balance between artificial intelligence, humans, and business ethics. Several AI ethical paradoxes are presented and discussed in this paper. Finally, a framework for an ethical design approach is presented for future studies.
- Research Article
- 10.30884/jfio/2023.03.01
- Sep 30, 2023
- Философия и общество
The article is devoted to the history of development of Information and Communication Technologies (ICT) and Artificial Intelligence (AI), their current and probable future achievements and the problems (which have already arisen, but will become even more acute in the future) associated with the development of these technologies and their active introduction in society. The close connection between the development of AI and cognitive science, the penetration of ICT and AI into various fields, in particular the field of health care, is shown. A significant part of the article is devoted to the analysis of the concept of “artificial intelligence”, including the definition of generative AI. An analysis of recent achievements in the field of Artificial Intelligence is performed, with descriptions of the basic models, in particular Large Language Models (LLMs), and forecasts of the development of AI and the dangers that will await us in the coming decades. We identify the forces behind the aspiration to create artificial intelligence, which is increasingly approaching the capabilities of the so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. The authors emphasize that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations and those often referred to as globalists. The article forecasts the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles.
The first, presented below, provides a brief historical overview and characterizes the current situation in the field of ICT and AI; it also analyzes the concepts of artificial intelligence, including generative AI, and changes in the understanding of AI in connection with the emergence of the so-called large language models and related new types of AI programs (ChatGPT). The article discusses the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. The second article, to be published in the next issue of the journal, describes and comments on current assessments of breakthroughs in the field of AI, analyzes various forecasts, and the authors give their own assessments and forecasts of future developments. Particular attention is given to the problems and dangers associated with the rapid and uncontrolled development of AI, the fact that achievements in the field of AI are becoming a powerful means of control over the population, imposing ideology and choice, influencing the results of elections, and a weapon for undermining security and geopolitical struggle.
- Research Article
- 10.1177/20539517231221780
- Jan 8, 2024
- Big Data & Society
While tech workers are essential stakeholders in ethical artificial intelligence (AI) development and deployment, they are rarely consulted about their understanding of the development of ethical AI. In light of this, we present the findings of our 2020 to 2021 empirical research study in which we collected data from tech workers in a major AI company to better understand what they consider to be the most pressing ethical issues when developing AI-powered products. While there is a nascent body of literature that examines how AI ethics principles are operationalised on the ground, this study differs in that we explicitly draw on feminist insights to inform our analysis, and have put a particular focus on allowing the voices and narratives of tech workers to lead the work forward. Our study generated three main findings: first, the term ‘bias’ creates real confusion among tech workers, meaning that the term is unable to do the ethical work it is intended to do; second, tech workers do not necessarily see a relationship between diversity, equality and inclusion (DEI) agendas and AI development, undermining AI ethics initiatives; and third, tech workers were particularly concerned about the monitoring and maintenance of unwieldy ‘legacy systems’ that generated serious challenges to creating and deploying new and more ethical AI products. This study thus creates a ‘thicker’ and more nuanced picture of tech workers’ perspectives on the ethical issues that arise when developing and maintaining AI systems, while simultaneously demonstrating the utility of feminist approaches in the field of AI ethics.
- Research Article
- 10.32983/2222-4459-2024-5-118-124
- Jan 1, 2024
- Business Inform
The article analyzes the rapid progress in the artificial intelligence sector as one of the most promising areas of information and communication technologies. An increase in the use of generative artificial intelligence (GenAI) systems in various business areas and a significant increase in investment are observed. The authors focus on the need to implement GenAI in Ukrainian business. At the same time, they emphasize the emergence of negative consequences arising from the development of artificial intelligence (AI), which in particular can be either accidental or malicious. The importance of risk management in the context of the use of GenAI for effective application in business is emphasized. An analysis of scientific publications in the field of artificial intelligence shows increasing interest in understanding and analyzing the risks of the development and use of AI. The need for continuous monitoring and development of institutional frameworks for effective AI risk management is underlined, including integrating the efforts of all stakeholders and differentiating efforts at different stages of the development and use of AI. It is noted that the development and use of AI have probable negative consequences, ranging from accidental to deliberate. Sometimes the information generated by AI systems may not be accurate; sometimes it is biased with respect to gender, race, and other stereotypes, and it can be used to facilitate unethical or criminal activities. Some of the inherent risks of AI have already been explored, while others remain unknown. This situation necessitates systematic monitoring of the possible implications of AI development and adoption, as well as the development of appropriate institutional frameworks to assess progress in this area.
- Research Article
- 10.2139/ssrn.3873097
- Jan 1, 2021
- SSRN Electronic Journal
Artificial Intelligence and Corporate Social Responsibility: Employees’ Key Role in Driving Responsible Artificial Intelligence at Big Tech
- Preprint Article
- 10.20944/preprints202501.2099.v1
- Jan 28, 2025
This paper examines the trajectory of artificial intelligence (AI) development, focusing on three key stages: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Recent advancements in AI architectures, particularly the evolution of transformer-based models, have significantly accelerated progress across these stages, enabling more sophisticated and scalable AI systems. This paper explores the architectural foundations of ANI, AGI, and ASI, highlighting recent modifications and their implications for future AI development. Additionally, the societal, ethical, and geopolitical implications of AI are discussed, emphasizing the need for robust safeguards and governance frameworks to ensure that AI serves as a force for human advancement rather than a source of existential risk. By integrating historical comparisons, current trends, and future projections, this paper provides a comprehensive analysis of the transformative potential of AI and its impact on humanity.
- Research Article
- 10.32840/cpu2219-8741/2025.1(61).6
- Jun 29, 2025
- State and Regions. Series: Social Communications
<p><strong><em>The purpose</em></strong><em> of this article is to provide a comprehensive analysis of the evolution, current state, and prospects of artificial intelligence (AI) application in advertising and public relations, as well as to evaluate its impact on the industry’s transformation through an examination of key application areas, ethical challenges, and technological trends. </em></p><p><strong><em>The research methodology</em></strong><em>. The study employs a comprehensive approach that integrates an analysis of historical data on the development of AI from mid-20th-century concepts to contemporary systems, a review of current sources and practical examples of AI utilization by leading companies, and an assessment of ethical and social implications based on scientific publications and open data. </em></p><p><strong><em>Results</em></strong><em>. The article investigates the evolution, current state, and future prospects of AI in advertising and public relations. It traces the historical development of AI from mid-20th-century concepts to modern sophisticated systems and analyzes four key areas of effective AI technology application in the contemporary advertising industry: targeting and consumer behavior prediction, content creation, customer interaction, and optimization with analytics. Through specific examples from leading companies (Meta, Lexus, Coca-Cola, JP Morgan Chase, Netflix), the practical use of AI tools to enhance the effectiveness of advertising campaigns is demonstrated. Particular attention is given to the ethical aspects and challenges associated with AI implementation, including data privacy concerns, algorithmic bias, and the «black box» problem. The article outlines prospects for the further development of AI in the advertising sector, emphasizing the importance of maintaining a balance between technological innovation and the human factor to maximize the effectiveness of marketing communications. 
</em></p><p><strong><em>Novelty</em></strong><em>. The study systematizes the stages of AI evolution in the advertising industry, identifies four key areas of its contemporary application, and offers recommendations for the responsible adoption of these technologies in view of ethical and social challenges.</em></p><p><strong><em>Practical significance</em></strong><em>. The research highlights practical examples of AI utilization by leading companies, enabling marketers to adapt these approaches to improve campaign efficiency, reduce costs, and enhance audience engagement. It also defines strategies for AI integration that preserve human creativity while addressing ethical concerns such as data protection and algorithmic transparency.</em></p><strong><em>Key words:</em></strong><em> artificial intelligence, advertising, public relations, targeting, personalization, generative AI, AI ethics, advertising automation.</em>
- Research Article
- 10.21146/2413-9084-2022-27-2-100-107
- Jan 1, 2022
- Philosophy of Science and Technology
The article considers a qualitatively new stage in the development of artificial intelligence (AI), associated with the development of artificial general intelligence (abbreviated as AGI in the international nomenclature – from Artificial General Intelligence). Unlike traditional AI, AGI is significantly closer in its functions to natural intelligence: it will be able to self-learn and to solve a wide range of tasks in different environments, i.e. be integral and autonomous. Such a level of “independence” of AGI opens up fundamentally new prospects for the development of information technologies, but at the same time poses many acute socio-humanitarian problems associated with the risks and threats of losing control over the development of AI. The successful development of AGI requires new theoretical and methodological approaches based on the principles of post-nonclassical epistemology and the results of neuroscientific and phenomenological studies of consciousness. It is very important to consider these issues against the background of the extreme aggravation of the global crisis of world civilization, due to its consumer dominance and to efforts to preserve its monopolar structure on the part of the United States and its Western allies. In this regard, a broader, philosophical-anthropological approach is also required to understand the current state of our civilization and the possibilities for its transformation. It involves taking into account what is called the nature of man, as a stable complex of mental and bodily properties reproduced among all peoples, in all historical epochs, and under all social structures, which indicates their biological conditionality. Among them, along with altruistic properties, a number of negative properties can be distinguished (such as unlimited consumerism, aggressiveness towards one’s own kind, and excessive egoistic self-will).
These characteristic properties of mass consciousness have been actively exploited by adherents of monopolarity in their own interests. Overcoming the principles and practices of monopolarity, and thereby changing the global social self-organization, is a necessary condition for a truly humanistic stage of anthropotechnological evolution, capable of opening up new existential prospects for the transformation of man and mankind.
- Research Article
- 10.71364/sfcj3f93
- Feb 27, 2025
- Journal of the American Institute
The development of artificial intelligence (AI) has brought about significant changes in many sectors of life, including industry, education, and health. However, advances in AI also pose moral and ethical challenges, especially related to transparency, fairness, and accountability in their use. In the context of ethical theology, moral responsibility in the development of AI is an important aspect that needs to be considered to ensure that this technology is developed and applied responsibly, in accordance with applicable human values and moral principles. This research aims to examine how the principles of ethical theology can provide a normative foundation for the development of more ethical and responsible AI. The method used in this study is a literature study, analyzing various academic sources, including scientific journals, books, and policy documents that discuss the relationship between AI, morality, and ethical theology. The data collected were then analyzed using qualitative content analysis to identify the main findings. The results of the study show that the development of ethical AI requires the integration of moral principles such as justice, love, accountability, and respect for human dignity. Additionally, human regulation and oversight remain necessary to ensure that AI is not used in a way that harms certain individuals or groups. Therefore, the ethical theology approach can be one of the solutions for formulating a more equitable and responsible AI policy.
- Research Article
- 10.31891/2307-5732-2024-341-5-74
- Oct 31, 2024
- Herald of Khmelnytskyi National University. Technical sciences
This article is dedicated to identifying the fundamental differences between artificial intelligence and machine learning. The aim of the research is to analyze, systematize, and improve the existing theoretical and methodological framework concerning the functioning of artificial intelligence and machine learning, as well as to define the distinctions between these systems. The article examines the current state and prospects for the development of artificial intelligence and machine learning as integral components of innovative technologies that facilitate the automation of complex processes and enhance efficiency across various fields. The research outlines the main stages in the development of artificial intelligence: weak artificial intelligence, which specializes in performing narrow tasks; strong artificial intelligence, aimed at achieving human cognitive capabilities; and superintelligence, which is expected to surpass human intelligence in many aspects. The study substantiates the role of machine learning as a key tool for implementing artificial intelligence, enabling the creation of systems that can self-learn and adapt to changing conditions without additional programming. The article provides examples of the application of artificial intelligence and machine learning in various sectors such as medicine, finance, cybersecurity, marketing, and transportation, where these technologies contribute to improving diagnostic processes, forecasting market fluctuations, and optimizing decision-making. In particular, the main advantages of machine learning are identified, including adaptability and the ability to make predictions based on large data sets, which enhances the effectiveness of analysis and decision-making in real time. 
The research into the development of artificial intelligence has revealed the technical and ethical challenges associated with creating strong artificial intelligence and superintelligence, which require the development of appropriate regulatory measures. The article emphasizes the significance of artificial intelligence and machine learning for modern society, their impact on various fields of economics, science, and technology, as well as the necessity for further research to ensure the safe and effective development of these technologies.
- Book Chapter
- 10.1016/b978-0-443-15299-3.00011-7
- Jan 1, 2023
- Accelerating Strategic Changes for Digital Transformation in the Healthcare Industry
Chapter 10 - A systems approach to implementing ethics in a COVID-19 AI application: A qualitative study
- Research Article
- 10.1515/omgc-2024-0041
- Jan 30, 2025
- Online Media and Global Communication
Purpose: This study analyzes China’s strategic initiatives in metaverse and artificial intelligence (AI) development, examining their impact on academic research, industry innovation, and policy formulation. It aims to understand how government policies and investments have shaped research agendas and to identify challenges and opportunities in these fields.
Design/methodology/approach: The research employs a comprehensive analysis of government documents, funding schemes, and research output. It examines key policies, investment programs, and academic publications to track trends in metaverse and AI development in China. The study utilizes bibliometric analysis to assess publication trends, citation patterns, and international collaboration networks.
Findings: China’s proactive approach, characterized by strong government support and significant private sector investment, has led to a substantial increase in research output and quality in metaverse and AI fields. Chinese institutions have become major contributors to global publications, with growing citation rates and presence at international conferences. The research identifies emerging challenges in privacy, ethical AI development, and digital divide concerns.
Practical implications: The findings provide insights for policymakers, researchers, and industry stakeholders on the development trajectory of metaverse and AI technologies in China. They highlight the need for balanced approaches to innovation, regulation, and ethical considerations in these rapidly evolving fields.
Social implications: The study underscores the potential of metaverse and AI technologies to transform various sectors of society, from education and healthcare to entertainment and social interactions. It emphasizes the importance of addressing digital equity and ethical AI deployment to ensure broad societal benefits.
Originality/value: This research offers a comprehensive overview of China’s approach to metaverse and AI development, providing a unique perspective on the interplay between government initiatives, academic research, and industry innovation. It contributes to the broader discussion on the global development of these transformative technologies and their implications for future technological landscapes.
- Research Article
- 10.32996/jcsts.2024.6.4.14
- Oct 16, 2024
- Journal of Computer Science and Technology Studies
The present study investigates the potential impact of artificial intelligence (AI) on the future trajectory of human civilization. It focuses on topics such as super-exponential growth, the potential emergence of a galactic civilization, and the associated "doom" hazards. Strong AI, also known as artificial general intelligence (AGI), a significant advancement in machine intelligence with human-like consciousness, creates new opportunities and capacities. There is growing concern that weak AI will eventually become strong AI. Every year, new transformer models that more closely resemble human interaction are created, and some indications of AGI have already been observed. It is anticipated that AI will reach a "singularity" and advance on its own without human assistance. The study explores the theoretical and practical foundations, model building blocks, development processes, challenges, and ethical issues surrounding the creation of conscious AI (AGI). It examines the meaning of the term "technological singularity," the various types of singularities and the associated idea of a point of no return, the philosophical risks of AI development, and the implications of an AI singularity for monetary theory and a new economic order. As a new perspective on the deployment of ethical AI in the face of tremendous technological advancement, the study not only contributes to the theoretical discourse but also explores the possible practical implications of AI for our shared future. Several obstacles to AI advancement are covered, along with prospective directions for future research.