Outline for a German Strategy for Artificial Intelligence
- Artificial Intelligence
- Artificial Intelligence Strategy
- Artificial Intelligence Ecosystem
- Artificial Intelligence Applications
- Artificial Intelligence Research
- Comprehensive Strategy
- Artificial Intelligence Development
- Powerful Processing Hardware
- Natural Science Programs
- Ecosystem Approach
- Research Article (1 citation)
- 10.1007/s43681-025-00663-2
- Feb 19, 2025
- AI and Ethics
Over fifty countries have published national infrastructure and strategy plans on Artificial Intelligence (AI), outlining their values and priorities regarding AI research, development, and deployment. This paper utilizes a deliberation and capabilities-based ethics framework, rooted in providing freedom of agency and choice to human beings, to investigate how different countries approach AI ethics within their national plans. We explore the commonalities and variations in national priorities and their implications for a deliberation and capabilities-based ethics approach. Combining established and novel methodologies such as content analysis, graph structuring, and generative AI, we uncover a complex landscape where traditional geostrategic formations intersect with new alliances, thereby revealing how various groups and associated values are prioritized. For instance, the Ibero-American AI strategy highlights strong connections among Latin American nations, particularly with Spain, emphasizing gender diversity but pragmatically and predominantly as a workforce issue. In contrast, a US-led coalition of “science and tech first movers” is more focused on advancing foundational AI and diverse applications. The European Union AI strategy showcases leading states like France and Germany while addressing regional divides, with more focus and detail on social mobility, sustainability, standardization, and democratic governance of AI. These findings offer an empirical lens into the current global landscape of AI development and ethics, revealing distinct national trajectories in the pursuit of ethical AI.
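The graph-structuring step this abstract describes can be illustrated with a minimal sketch: code the themes each national strategy emphasizes, then link countries that share themes. The country-to-theme data below is invented for illustration and is not the paper's actual dataset.

```python
# Hypothetical sketch of graph structuring for national AI strategies:
# countries become nodes, and an edge weight counts the coded priority
# themes two strategies share. The data here is illustrative only.
from collections import defaultdict
from itertools import combinations

# Illustrative coded themes per national strategy (assumed, not the paper's data).
strategy_themes = {
    "Spain": {"gender diversity", "workforce"},
    "Mexico": {"gender diversity", "workforce"},
    "USA": {"foundational AI", "applications"},
    "Germany": {"sustainability", "standardization"},
    "France": {"sustainability", "standardization"},
}

# Build an undirected weighted edge list: countries linked by shared themes.
edges = defaultdict(int)
for a, b in combinations(sorted(strategy_themes), 2):
    shared = strategy_themes[a] & strategy_themes[b]
    if shared:
        edges[(a, b)] += len(shared)

for (a, b), w in sorted(edges.items()):
    print(f"{a} -- {b}: {w} shared theme(s)")
```

On this toy data, the projection recovers exactly the kinds of clusters the abstract mentions: a Spain–Latin America grouping around workforce-framed gender diversity, and a France–Germany grouping around sustainability and standardization.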
- Research Article (81 citations)
- 10.1108/ijchm-02-2023-0189
- Aug 11, 2023
- International Journal of Contemporary Hospitality Management
Purpose: The purpose of this study is to analyze state-of-the-art knowledge of artificial intelligence (AI) research in hospitality. Design/methodology/approach: This study adopts the theory-context-methods framework to systematically review 100 AI-related articles recently published (i.e. from 2021 to April 2023) in three top-tier hospitality journals, namely, the International Journal of Contemporary Hospitality Management, International Journal of Hospitality Management and Journal of Hospitality Marketing and Management. Findings: Findings suggest that studies of AI applications in hospitality are mostly theory-driven, whereas most AI methods research adopts a data-driven approach. State-of-the-art AI applications research exhibits the most interest in service robots. In AI methods research, little attention was paid to the amid-service/experience. Research limitations/implications: This study reveals inadequacies in theory, context and methods in contemporary AI research. More research from hospitality suppliers’ perspectives and research on generative AI applications are advocated in response to the unveiled research gaps and recent AI developments. Originality/value: This study classifies the most recent AI research in hospitality into two main streams – AI applications research and AI methods research – and discusses the gaps in each research stream and latest AI developments. The paper then suggests future research directions to guide researchers in advancing AI research in hospitality.
- Research Article (34 citations)
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts stipulating that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement, SAG made its position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article (1 citation)
- 10.51702/esoguifd.1583408
- May 15, 2025
- Eskişehir Osmangazi Üniversitesi İlahiyat Fakültesi Dergisi
Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may eventually surpass it. The rapid development of these technologies has raised various ethical debates such as moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions in detail, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of various disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thoughts on artificial intelligence have raised deep philosophical questions about human nature, consciousness, and responsibility. The algorithms developed by Alan Turing contributed to the modern shaping of artificial intelligence and put forward the first models, such as the “Turing Test”, to assess whether machines have human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence includes systems designed to perform specific tasks that do not exhibit general intelligence outside of those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most of the widely used artificial intelligence applications today fall into the category of weak artificial intelligence.
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Autonomous weapons and artificial intelligence-managed justice systems, in particular, raise concerns in terms of human rights and individual freedoms. In this context, the ethical framework of artificial intelligence has deep impacts on the future of humanity and human-machine interaction, reaching well beyond technological boundaries. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness and whether such conscious systems could have a spiritual status have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles that need to be considered in the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for the development of artificial intelligence in harmony with social values and for the benefit of humanity.
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.
- Book Chapter
- 10.1007/978-981-13-9390-7_6
- Nov 20, 2019
With the rapid development and application of artificial intelligence (AI), computer technology has entered a new era of Information Technology (IT): Intelligent Technology. AI can accelerate the information construction of science and technology. In the past two years, AI research has been promoted to the level of national development strategy in China. This chapter explores the origin and development of AI and AI development in China. AMiner, a big data analysis and service platform for science and technology, was independently developed in China. It is a successful case in the informatization of science and technology in China. Based on the open dataset of AI in AMiner, we give a classification of AI research in China. We overview the AI research situation in China based on analysis of experts, papers, and patents. AI applications, such as speech recognition, face recognition, automatic driving, and so on, are introduced in the chapter. We also discuss the opportunities and challenges of AI in China. In general, this chapter fills a gap in authoritative analysis of the AI research situation in China.
- Research Article (192 citations)
- 10.1016/j.ijnurstu.2021.104153
- Dec 7, 2021
- International journal of nursing studies
Background: Research on technologies based on artificial intelligence in healthcare has increased during the last decade, with applications showing great potential in assisting and improving care. However, introducing these technologies into nursing can raise concerns related to data bias in the context of training algorithms and potential implications for certain populations. Little evidence exists in the extant literature regarding the efficacious application of many artificial intelligence-based health technologies used in healthcare. Objectives: To synthesize currently available state-of-the-art research in artificial intelligence-based technologies applied in nursing practice. Design: Scoping review. Methods: PubMed, CINAHL, Web of Science and IEEE Xplore were searched for relevant articles with queries that combine names and terms related to nursing, artificial intelligence and machine learning methods. Included studies focused on developing or validating artificial intelligence-based technologies with a clear description of their impacts on nursing. We excluded non-experimental studies and research targeted at robotics, nursing management and technologies used in nursing research and education. Results: A total of 7610 articles published between January 2010 and March 2021 were identified, with 93 articles included in this review. Most studies explored the technology development (n = 55, 59.1%) and formation (testing) (n = 28, 30.1%) phases, followed by implementation (n = 9, 9.7%) and operational (n = 1, 1.1%) phases. The vast majority (73.1%) of studies provided evidence with a descriptive design (level VI) while only a small portion (4.3%) were randomised controlled trials (level II). The study aims, settings and methods were poorly described in the articles, and discussion of ethical considerations was lacking in 36.6% of studies. Additionally, one-third of papers (33.3%) were reported without the involvement of nurses.
Conclusions: Contemporary research on applications of artificial intelligence-based technologies in nursing mainly covers the earlier stages of technology development, leaving scarce evidence of these technologies’ impact and of their implementation into practice. The content of research reported is varied. Therefore, guidelines on reporting research and implementing artificial intelligence-based technologies in nursing are needed. Furthermore, integrating basic knowledge of artificial intelligence-related technologies and their applications into nursing education is imperative, and interventions to increase the inclusion of nurses throughout the technology research and development process are needed.
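The query construction the Methods section describes — combining nursing terms with AI and machine-learning terms — can be sketched as a small helper. The term lists and boolean syntax below are assumptions for illustration, not the authors' actual search strings.

```python
# Illustrative sketch of a boolean database query of the kind the scoping
# review describes: one OR-block of nursing terms AND one OR-block of
# AI/machine-learning terms. Term lists here are assumed, not the authors'.
nursing_terms = ["nursing", "nurse"]
ai_terms = ["artificial intelligence", "machine learning", "deep learning"]

def build_query(group_a, group_b):
    """Join each term group with OR, then combine the two groups with AND."""
    block_a = " OR ".join(f'"{t}"' for t in group_a)
    block_b = " OR ".join(f'"{t}"' for t in group_b)
    return f"({block_a}) AND ({block_b})"

query = build_query(nursing_terms, ai_terms)
print(query)
# ("nursing" OR "nurse") AND ("artificial intelligence" OR "machine learning" OR "deep learning")
```

A string of this shape can be pasted into the advanced-search interface of databases such as PubMed or CINAHL; real searches would additionally use controlled vocabulary (e.g. MeSH headings) and date limits, which are omitted here.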
- Research Article (5 citations)
- 10.1057/s41599-024-03289-7
- Jul 2, 2024
- Humanities and Social Sciences Communications
Artificial intelligence (AI) is arguably the most transformative technology of our time. While all nations would like to mobilize their resources to play an active role in AI development and utilization, only a few, such as the United States and China, have the resources and capacity to do so. How, then, can smaller or less resourceful countries navigate the technological terrain to emerge at the forefront of AI development? This research presents an in-depth analysis of Singapore’s journey in constructing a robust AI ecosystem amidst the prevailing global dominance of the United States and China. By examining the case of Singapore, we argue that by designing policies that address risks associated with AI development and implementation, smaller countries can create a vibrant AI ecosystem that encourages experimentation and early adoption of the technology. In addition, through Singapore’s case, we demonstrate the active role the government can play, not only as a policymaker but also as a steward to guide the rest of the economy towards the application of AI.
- Research Article (2 citations)
- 10.2139/ssrn.3841656
- May 7, 2021
- SSRN Electronic Journal
This research presents a detailed case analysis of BGL Group, a leading international distributor of insurance and household financial services. The AI strategy is described by analysing and evaluating a set of AI applications covering a variety of business areas:
1. Machine learning for pricing;
2. Chatbot AI technology to improve the customer experience in e-service;
3. Customer experience design thinking and A/B testing in new product development;
4. Voice recognition and Natural Language Processing (NLP) in call centre operations;
5. AI techniques for market segmentation.
Each application is described in detail, and the concept of value creation in service markets is illustrated using data flow diagrams of customer interactions for different stages of the customer journey. A benefits matrix model is proposed that captures the principal AI benefits to both the supplier and the customer. The case discussion uses a new model, an AI systems map, to describe and explain the overall landscape of current AI applications, traditional Management Information Systems (MIS) and possible future application areas based on broad AI strategies and cognitive AI/thinking machines. Some concluding remarks are made on the importance of a digital-first culture, up-to-date digital infrastructure and technology partnerships for successful implementation of AI systems, the crucial role of big data in AI strategies, and the growing importance of AI ethics in business applications. Finally, some propositions are offered regarding the future direction of AI in insurance markets.
- Research Article (5 citations)
- 10.12968/coan.2022.0028a
- Jun 2, 2023
- Companion Animal
Artificial intelligence is a newer concept in veterinary medicine than in human medicine, but its existing benefits illustrate the significant potential it may also have in this field. This article reviews the application of artificial intelligence to various fields of veterinary medicine. Successful integration of different artificial intelligence strategies can offer practical solutions to issues in practice, such as time pressure. Several databases were searched to identify literature on the application of artificial intelligence in veterinary medicine. Exclusion and inclusion criteria were applied to obtain relevant papers. There was evidence for an acceleration of artificial intelligence research in recent years, particularly for diagnostics and imaging. Some of the benefits of using artificial intelligence included standardisation, increased efficiency, and a reduction in the need for expertise in particular fields. However, limitations identified in the literature included a requirement for ideal conditions for artificial intelligence to achieve accuracy, as well as other inherent, unresolved issues. Ethical considerations and a hesitancy to engage with artificial intelligence, by both the public and veterinarians, are further barriers that must be addressed before artificial intelligence can be fully integrated into daily practice. The rapid growth in artificial intelligence research substantiates its potential to improve veterinary practice.
- Research Article (5 citations)
- 10.3390/technologies13020051
- Jan 30, 2025
- Technologies
This comprehensive survey explored the evolving landscape of generative Artificial Intelligence (AI), with a specific focus on recent technological breakthroughs and the gathering advancements toward possible Artificial General Intelligence (AGI). It critically examined the current state and future trajectory of generative AI, exploring how innovations in developing actionable and multimodal AI agents with the ability to scale their “thinking” in solving complex reasoning tasks are reshaping research priorities and applications across various domains, while the survey also offers an impact analysis on the generative AI research taxonomy. This work has assessed the computational challenges, scalability, and real-world implications of these technologies while highlighting their potential in driving significant progress in fields like healthcare, finance, and education. Our study also addressed the emerging academic challenges posed by the proliferation of both AI-themed and AI-generated preprints, examining their impact on the peer-review process and scholarly communication. The study highlighted the importance of incorporating ethical and human-centric methods in AI development, ensuring alignment with societal norms and welfare, and outlined a strategy for future AI research that focuses on a balanced and conscientious use of generative AI as its capabilities continue to scale.
- Research Article (20 citations)
- 10.1108/fs-06-2022-0069
- Mar 14, 2023
- foresight
Purpose: This study aims to predict artificial intelligence (AI) technology development and the impact of AI utilization activity on companies, to identify AI strategies dealing with the broad innovation activity of AI, and to construct a strategic decision-making framework of AI strategies for small- and medium-sized enterprises (hereafter SMEs), to improve strategic decision-making practices of AI strategy in SMEs. Design/methodology/approach: This study used multiple methods in the design of two data collection stages. The first stage is an expertise-based approach: it organized three groups of expert panels and conducted a Delphi survey on them, in combination with brainstorming on technology, innovation and strategy in the fourth industrial revolution. The second stage complements the expertise-based results: it used a literature review involving the analysis of academic and practical papers, reports and audio materials relating to technology development, innovation types and strategies of AI. Additionally, it organized four semi-structured interviews. Finally, this study used a mind-map and decision tree to conduct each analysis and synthesize each analytical result. Findings: This study identifies the precondition and four paths of AI technological development, classified into specialized AI, AI convergence with other technologies, general AI and AI control methods. It captures the impact of non- and technological innovation through AI on companies. Second, it identifies and classifies six types of AI strategy: the bystander, capability-building, capability-holding, management-enhancing, market-enhancing and new-market-creating strategy. Using a decision tree, it constructs a strategic decision-making framework containing the six AI strategies.
Actionable points, strategic priorities and relevant instruments are suggested. Research limitations/implications: The strategic decision-making framework, covering AI technology development through utilization in an SME, can help in understanding strategic behaviours in SMEs. The typology of six AI strategies reflects the broad innovation behaviours in SMEs. It can lead to further research to understand the pattern of strategic and innovation behaviour on AI. Practical implications: This practical study can help executives, managers and engineers in SMEs to develop their strategic practices through the strategic decision framework and six AI strategies. Originality/value: This practical study elicits the six types of AI strategy and constructs the strategic decision-making framework of six AI strategies from AI technology development to utilization. It can contribute to improving the practices of strategic decision-making in SMEs.
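A decision-tree framework over the six strategy types the study names can be sketched as a small rule-based classifier. The branching questions below are illustrative assumptions, not the paper's actual decision criteria.

```python
# Hypothetical sketch of a decision-tree-style classifier over the six AI
# strategy types identified in the study (bystander, capability-building,
# capability-holding, management-enhancing, market-enhancing,
# new-market-creating). The branching conditions are assumed for
# illustration and are not the paper's criteria.
def classify_ai_strategy(adopting_ai, building_capability, target):
    """Walk a simple decision tree to label an SME's AI strategy."""
    if not adopting_ai:
        return "bystander"                 # no engagement with AI at all
    if building_capability:
        return "capability-building"       # investing in new AI capability
    if target == "internal management":
        return "management-enhancing"      # AI applied to internal processes
    if target == "existing market":
        return "market-enhancing"          # AI strengthens current offerings
    if target == "new market":
        return "new-market-creating"       # AI opens a new market
    return "capability-holding"            # capability held, no current target

print(classify_ai_strategy(False, False, None))         # bystander
print(classify_ai_strategy(True, False, "new market"))  # new-market-creating
```

Each path from root to leaf corresponds to one strategy type, which is the structural point of the decision-tree framework: an SME answers a short sequence of questions and arrives at exactly one of the six strategies.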
- Discussion (46 citations)
- 10.1016/s2589-7500(22)00032-2
- Mar 22, 2022
- The Lancet Digital Health
An interactive dashboard to track themes, development maturity, and global equity in clinical artificial intelligence research
- Research Article (14 citations)
- 10.59022/ijlp.27
- Mar 1, 2023
- International Journal of Law and Policy
Artificial intelligence strategies refer to the plans and actions taken by governments to develop and apply AI technologies to achieve specific goals. This article explores Uzbekistan's policies and preferences regarding the development and implementation of artificial intelligence (AI) technologies. The study examines the country's national strategies and regulatory frameworks for AI, as well as the challenges it faces in realizing its AI ambitions. The analysis reveals that Uzbekistan sees AI as a key enabler of economic growth, social development, and modernization, and aims to become a regional leader in AI by 2030. To achieve this goal, the government has launched several initiatives, such as establishing AI research centers, promoting entrepreneurship and innovation, and investing in digital infrastructure. However, the article also identifies several obstacles, such as a lack of skilled workforce, limited funding, and ethical and legal concerns. The study concludes by providing recommendations for how Uzbekistan can address these challenges and strengthen its AI ecosystem.
- Discussion (6 citations)
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article (5 citations)
- 10.2139/ssrn.3880779
- Jan 1, 2021
- SSRN Electronic Journal
The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these principles, there is mounting public concern over the influence that AI systems have in our society, and coalitions in all sectors are organizing to resist harmful applications of AI worldwide. Responses have come from people everywhere: from workers protesting unethical conduct and applications of AI; to students protesting MIT's relationship with donor, sex trafficker, and pedophile Jeffrey Epstein; to the healthcare community; to indigenous peoples addressing “the twin problems of a lack of reliable data and information on indigenous peoples and biopiracy and misuse of their traditional knowledge and cultural heritage”; to smart city stakeholders; to many others. Like corporations, governments around the world have adopted strategies for becoming leaders in the development and use of Artificial Intelligence, fostering environments congenial to AI innovators. Neither corporations nor policymakers have sufficiently addressed how the rights of children fit into their AI strategies or products. The role of artificial intelligence in children’s lives—from how children play, to how they are educated, to how they consume information and learn about the world—is expected to increase exponentially over the coming years. Thus, it’s imperative that stakeholders evaluate the risks and assess opportunities to use artificial intelligence to maximize children’s wellbeing in a thoughtful and systematic manner. This paper discusses AI and children's rights in the context of social media platforms such as YouTube, smart toys, and AI education applications. The Hello Barbie, Cloud Pets, and Cayla smart toy case studies are analyzed, as well as the ElsaGate social media hacks, education's new Intelligent Tutoring Systems, and student-surveillance apps.
Though AI has valuable benefits for children, it presents particular challenges around important issues including child safety, privacy, data privacy, device security and consent. Technology giants, all of whom are heavily investing in and profiting from AI, must not dominate the public discourse on responsible use of AI. We all need to shape the future of our core values and democratic institutions. As artificial intelligence continues to find its way into our daily lives, its propensity to interfere with our rights only gets more severe. Many of the issues mentioned in this examination of harmful AI are not new, but they are greatly exacerbated by the scale, proliferation, and real-life impact that artificial intelligence facilitates. The potential of artificial intelligence to both help and harm people is much greater than that of earlier technologies. Continuing to examine what safeguards and structures can address AI’s problems and harms, including those that disproportionately impact marginalized people, is a critical activity. There are assumptions embedded in AI algorithms that will shape how our world is realized. Many of these algorithms are wrongful and biased, and they must not be locked in. Our best human judgment is needed to contain AI's harmful impacts. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.