Ethical AI governance: AI for society and a co-learning approach
Artificial intelligence (AI) presents transformative opportunities and complex ethical challenges. This paper adopts a socio-technical perspective, emphasizing that AI is not an isolated technology but rather deeply embedded in evolving societies. It critiques governance models, particularly rule-based approaches in the West, which, whilst addressing some risks, often stifle innovation and fail to address diverse societal needs. This paper proposes an alternative framework integrating Western risk-management strategies with Chinese ethical principles rooted in Confucianism and Daoism. These principles emphasize dynamism, flexibility, relational stakeholder participation, and context-sensitive solutions to align AI with societal and environmental goals. The proposed model advocates for a co-learning approach to AI ethics, recognizing the dynamic interactions among developers, users, policymakers, and the public. By fostering participatory governance and adaptive ethical frameworks, it addresses both known and unknown risks while promoting equitable, sustainable development. It calls for cooperation to harness AI's transformative potential, ensuring it evolves in ways that benefit society and mitigate harm.
- Research Article
- 10.63075/20x6kb11
- Jun 28, 2025
- Advance Journal of Econometrics and Finance
This literature review explores the transformative potential of Artificial Intelligence (AI) in advancing sustainable development, highlighting its applications across sectors such as finance, construction, healthcare, and cultural heritage. AI’s capabilities in data processing, automation, and decision-making enable resource optimization and support progress toward the Sustainable Development Goals (SDGs). However, a major concern is the “principles-to-practices gap,” wherein high-level ethical AI frameworks lack clear implementation mechanisms, especially in low-resource or marginalized contexts. The review synthesizes global case studies, including AI deployment in mountain communities and cultural institutions, to demonstrate the value of context-sensitive, human-centric design. These examples reveal how AI can bridge digital divides and empower underrepresented groups when developed inclusively. However, risks of “AI neo-colonialism” persist, as governance models from high-income countries may marginalize diverse development needs. The review identifies shared themes such as data centrality, ethical design, and alignment with SDGs, while highlighting disparities in resources, governance models, and goals across organizations. It underscores the need for adaptive, inclusive AI governance frameworks that balance innovation with accountability. Policy implications include the need for enforceable, risk-based AI regulations, international cooperation for harmonized standards, and investment in explainable AI and infrastructure sustainability. Future research should prioritize empirical studies on governance practices, particularly in the Global South, and develop sector-specific tools to map AI’s contributions to sustainability. Ultimately, responsible AI governance must integrate social, cultural, and political dimensions to ensure that AI supports not just innovation, but equitable, inclusive, and sustainable global development. Keywords: Artificial Intelligence, Sustainable Development, Ethical Governance, SDGs
- Research Article
- 10.52783/jisem.v10i30s.4775
- Mar 31, 2025
- Journal of Information Systems Engineering and Management
In an era marked by rapid technological advancement, the convergence of corporate governance and artificial intelligence (AI) ethics has emerged as a pivotal concern for modern businesses. As AI technologies become deeply embedded in decision-making processes, the risks of ethical violations, bias, lack of transparency, and accountability have intensified. While AI promises significant improvements in efficiency, innovation, and strategic agility, these benefits can only be realized within a robust ethical and governance framework. This paper explores how corporate governance can be strategically aligned with AI ethics to promote responsible innovation and uphold societal trust. The research delves into the intersection of governance structures, ethical principles, and AI applications to propose a comprehensive strategic framework that ensures ethical decision-making in business. It examines the roles and responsibilities of boards of directors, executive leadership, and key stakeholders in fostering an ethical AI culture. Emphasis is placed on principles such as transparency, accountability, fairness, privacy, and inclusivity, all of which are essential to maintain corporate integrity and public confidence. This study employs a multidisciplinary approach, integrating insights from corporate governance theory, ethical philosophy, AI regulatory policies, and business case studies. The paper also investigates existing challenges businesses face in implementing AI ethically, including regulatory ambiguity, insufficient oversight mechanisms, and potential conflicts of interest. Ultimately, this paper offers strategic recommendations for integrating AI ethics into corporate governance frameworks, including the adoption of ethical guidelines, establishment of AI ethics committees, continuous training for employees, and fostering stakeholder engagement. These measures aim to ensure that organizations not only comply with legal standards but also go beyond compliance to embrace ethical leadership in the age of AI. By presenting a robust strategic framework, this research contributes to ongoing discussions on ethical AI and responsible corporate governance, encouraging businesses to adopt proactive, transparent, and inclusive strategies that align technological innovation with societal values.
- Book Chapter
- 10.62311/nesx/97832
- Mar 25, 2024
Abstract: In an era where artificial intelligence (AI) significantly impacts societal norms, economic structures, and individual rights, establishing a framework for ethical AI governance emerges as a paramount concern. "Ethical AI Governance: A Global Blueprint" provides a comprehensive exploration of the principles, challenges, and strategies necessary for implementing ethical governance of AI technologies across the globe. This chapter delves into the universally recognized principles of fairness, accountability, transparency, and privacy, examining their application in diverse cultural and legal contexts. It highlights the global challenges and opportunities in harmonizing ethical AI governance, emphasizing the need for international cooperation, stakeholder engagement, and the development of adaptable regulatory mechanisms. Through a review of existing frameworks and models, along with case studies of successful implementations, the chapter offers a detailed blueprint for ethical AI governance. This blueprint advocates for collaborative platforms, educational initiatives, and robust monitoring and evaluation mechanisms to ensure AI's ethical development and deployment. As AI continues to evolve, this chapter serves as a foundational guide for stakeholders worldwide to navigate the complex landscape of ethical AI governance, fostering innovation that aligns with human values and societal norms. Keywords/Index Terms: Artificial Intelligence (AI), Ethical Governance, Global Blueprint, Fairness, Accountability, Transparency, Privacy, International Cooperation, Regulatory Mechanisms, Stakeholder Engagement, Education and Awareness, Monitoring and Evaluation, Cultural Diversity, Legal Variances, Economic Disparities, Frameworks and Models, Case Studies and Innovation Ecosystems.
- Research Article
- 10.62823/ijgrit/03.2(ii).7624
- Jun 24, 2025
- International Journal of Global Research Innovations & Technology
Background: Artificial intelligence (AI) and automation are increasingly influencing workplace decision-making, particularly in recruitment, performance evaluations, and career progression. While AI is often perceived as neutral, research highlights that these systems frequently replicate and amplify historical gender biases, disproportionately disadvantaging women and marginalized groups. Existing AI fairness models primarily focus on generic algorithmic bias but fail to address gender-specific and intersectional discrimination. Additionally, corporate AI governance frameworks lack structured enforcement mechanisms, leading to reactive rather than proactive bias mitigation. Objective: This study aims to develop a structured framework for mitigating gender bias in AI-driven workplace automation. It seeks to bridge the gap between AI development and ethical workforce practices by integrating fairness, accountability, and inclusivity into algorithmic decision-making. Methodology: A conceptual research design is adopted, synthesizing insights from AI fairness literature, gender studies, and corporate governance frameworks. The study relies on secondary data sources, including peer-reviewed journal articles, industry reports, and case studies on AI-driven workplace discrimination. Theoretical models such as Gender Role Theory, Algorithmic Bias Theory, and Intersectionality Theory inform the framework's development. Proposed Model: The study introduces the G.E.N.D.E.R. AI Framework as a structured approach to mitigating gender bias in AI-driven workplace automation. This framework integrates six core components to ensure fairness, accountability, and inclusivity in algorithmic decision-making. Governance and regulation serve as the foundation, establishing AI fairness policies and ensuring compliance with ethical and legal standards. Equitable data training addresses biases embedded in historical datasets by implementing strategies to eliminate discriminatory patterns and promote balanced representation. Neutrality in algorithm design emphasizes fairness-aware programming and model transparency, ensuring that AI-driven systems do not reinforce systemic inequalities. Diversity in AI development teams plays a crucial role in reducing bias by incorporating inclusive perspectives in the design and deployment of AI technologies. Evaluation and bias audits enable continuous monitoring of AI-driven decisions, facilitating early detection and correction of discriminatory patterns in hiring, performance assessments, and career progression. Lastly, responsible AI usage mandates human oversight in AI-powered employment decisions, ensuring that algorithmic recommendations are critically reviewed and do not replace human judgment in critical workplace determinations. By integrating these principles, the G.E.N.D.E.R. AI Framework provides a comprehensive, interdisciplinary model designed to promote gender-equitable AI governance and ethical automation in workforce management. Results: The framework provides a structured, interdisciplinary approach to embedding gender equity into AI decision-making. It highlights key challenges in existing AI fairness models and offers actionable solutions for AI developers, HR professionals, and policymakers. Conclusion: As AI continues to shape workforce dynamics, it is critical to ensure that automation fosters inclusivity rather than reinforcing historical inequalities. The G.E.N.D.E.R. AI Framework serves as a foundation for ethical AI governance, promoting gender fairness in workplace automation. Future research should focus on empirical validation, industry-specific adaptations, and the integration of explainable AI techniques to enhance fairness in AI-driven employment decisions.
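The framework's evaluation-and-bias-audits component can be made concrete with a short example. Below is a minimal sketch, not drawn from the paper itself, of a demographic parity audit over a hypothetical hiring log; the group labels, data, and 0.1 alert threshold are illustrative assumptions only.

```python
# Illustrative bias audit: demographic parity gap over hiring outcomes.
# All data and the 0.1 alert threshold are hypothetical, not from the paper.
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, hired) pairs."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit log of (gender, hiring decision) pairs.
audit_log = [("F", True), ("F", False), ("F", False), ("F", True),
             ("M", True), ("M", True), ("M", False), ("M", True)]

gap, rates = demographic_parity_gap(audit_log)
print(rates)            # {'F': 0.5, 'M': 0.75}
print(round(gap, 2))    # 0.25
if gap > 0.1:           # illustrative alert threshold
    print("Parity gap exceeds threshold: flag for human review.")
```

In a real deployment such a check would run continuously over hiring, evaluation, and promotion decisions, in the spirit of the continuous monitoring the abstract describes.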
- Single Book
- 10.62311/nesx/97891
- Mar 14, 2025
Abstract: As Artificial Intelligence (AI) advances, so do the risks associated with deepfakes, misinformation, and algorithmic bias, posing significant threats to security, privacy, democracy, and societal trust. "Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems" provides a comprehensive analysis of AI security vulnerabilities, adversarial machine learning, AI-driven misinformation, and bias in automated decision-making. The book explores how AI-generated synthetic media, data poisoning attacks, and biased algorithms are being weaponized for cyber fraud, political manipulation, and unethical automation. It delves into defensive strategies, AI forensic tools, cryptographic AI verification, and fairness-aware machine learning techniques to combat these emerging threats. Additionally, the book examines global AI regulations, governance frameworks, and ethical deployment standards that ensure transparency, accountability, and security in AI-driven ecosystems. Through real-world case studies, technical insights, and policy recommendations, this book serves as an essential resource for AI researchers, cybersecurity professionals, policymakers, and technology leaders aiming to develop trustworthy AI systems that resist adversarial manipulation, misinformation campaigns, and algorithmic bias while fostering fair, transparent, and secure AI adoption. Keywords: AI security, adversarial machine learning, deepfake detection, AI-generated misinformation, synthetic media, bias mitigation, AI ethics, AI governance, trustworthy AI, explainable AI (XAI), fairness-aware machine learning, cryptographic AI, federated learning security, digital forensics, algorithmic bias, data poisoning attacks, model robustness, cybersecurity in AI, misinformation detection, deep learning security, AI regulatory policies, zero-trust AI, blockchain-based content verification, ethical AI deployment, secure AI frameworks, AI transparency, AI-driven cyber threats, fake news detection, AI fraud prevention.
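As a concrete instance of the content-verification techniques the book covers, the sketch below checks a media artifact against a SHA-256 digest recorded at publication time. This detects post-publication tampering with a specific file, not whether content is synthetic; the file name and workflow are hypothetical, and production provenance schemes (such as signed C2PA-style manifests) are considerably more involved.

```python
# Illustrative integrity check: a digest recorded at publication time
# lets a recipient detect later tampering. File name and workflow are
# hypothetical, for illustration only.
import hashlib

def sha256_of_file(path, chunk_size=8192):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Simulate publication: write a media file and record its digest.
with open("clip.bin", "wb") as f:
    f.write(b"original media bytes")
published_digest = sha256_of_file("clip.bin")

# Simulate tampering, then re-verify against the recorded digest.
with open("clip.bin", "ab") as f:
    f.write(b" injected frames")
status = "intact" if sha256_of_file("clip.bin") == published_digest else "modified"
print(status)  # -> modified
```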
- Research Article
- 10.62311/nesx/rphcr17
- May 30, 2025
- International Journal of Academic and Industrial Research Innovations(IJAIRI)
Abstract: As artificial intelligence (AI) systems become integral to public sector operations—ranging from predictive analytics in welfare programs to algorithmic decision-making in law enforcement—concerns over transparency, fairness, and accountability have intensified. This study investigates the development of governance frameworks that ensure ethical AI deployment in public service delivery. Drawing from a mixed-methods approach, we conduct a comparative case analysis of AI initiatives across six countries (Canada, Estonia, UK, India, Brazil, and the USA) and analyze empirical survey data from 124 public officials and AI practitioners. Using descriptive statistics, multiple regression, and exploratory factor analysis, we identify four foundational pillars of ethical AI governance: legal-policy alignment, ethical design principles, technical auditability, and multi-stakeholder engagement. The findings reveal that governance structures emphasizing transparency and accountability significantly enhance public trust and reduce algorithmic risk. Notably, participatory models with continuous oversight mechanisms outperform top-down regulatory schemes in fostering ethical compliance. This research contributes to the discourse on responsible AI by offering a validated governance framework tailored to the unique demands of the public sector. Our results have practical implications for policymakers, technologists, and civil society actors aiming to embed ethical safeguards into the architecture of AI systems. Keywords: Ethical AI, Public Sector AI, Governance Frameworks, Algorithmic Accountability, Transparency, Legal-Policy Alignment, Multi-Stakeholder Engagement, AI Regulation, Responsible Innovation, AI Ethics Compliance
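The quantitative step the abstract describes, regressing survey outcomes on governance factors, can be sketched as follows. The data below are synthetic and the variable names are assumptions; only the sample size of 124 respondents is taken from the abstract.

```python
# Illustrative multiple regression in the spirit of the study's analysis:
# a public-trust score regressed on four governance pillars. The data are
# simulated; coefficients and names are assumptions, not the real survey.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 124  # reported sample size; responses here are simulated
# Columns: legal-policy alignment, ethical design, auditability, engagement.
pillars = rng.uniform(1, 5, size=(n, 4))
true_weights = np.array([0.4, 0.3, 0.2, 0.3])  # invented for the simulation
trust = pillars @ true_weights + rng.normal(0, 0.5, size=n)

X = sm.add_constant(pillars)         # add an intercept column
model = sm.OLS(trust, X).fit()       # ordinary least squares fit
print(model.summary())               # per-pillar coefficients and p-values
```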
- Research Article
- 10.69554/yqzc2796
- Dec 1, 2023
- Journal of Digital Banking
This paper seeks to advance the development of responsible artificial intelligence (AI) practices within the banking industry through an in-depth case study analysis of De Volksbank’s governance framework for AI ethics. While the financial sector has historically embraced technological innovation, the increasing integration of AI into customer- and operations-focused systems has raised ethical concerns. Despite regulatory initiatives and the broad availability of ethical frameworks for responsible AI, numerous challenges related to operationalising these frameworks persist. Addressing the existing gap between ethical principles and AI practices in finance, this paper examines De Volksbank’s ethical governance framework as a potential organisational configuration of structures and processes necessary to meaningfully operationalise AI ethics. The paper provides a comprehensive overview of De Volksbank’s governance framework, detailing its requirements, roles and responsibilities across various dimensions of AI ethics governance. Additionally, it elaborates on four important insights into operationalising ethics, framing AI ethics as a challenge in (1) organisational design, (2) interdisciplinary expertise and responsibilities, (3) proactive governance and (4) high-quality processes for ethical inquiry. Given the indispensability of trust for the financial sector, trustworthy AI is of crucial importance for its long-term legitimacy and existence. Consequently, this paper seeks to enhance the understanding of operationalising AI ethics in banking and to serve as an impetus for other organisations aspiring to establish ethical frameworks for their AI systems.
- Book Chapter
- 10.1108/978-1-83662-570-420251009
- Dec 1, 2025
Although there are many drivers for the widespread adoption of artificial intelligence (AI) in the architecture-engineering-construction industry (AECI), there are also many challenges. These challenges need to be addressed for humanity-centred Society 5.0 and Industry 5.0, as the spread and use of AI in the AECI must be human-friendly, contributing to all pillars of sustainability and to humanity's sustainability, welfare, and well-being. Unless companies in the AECI invest in AI in ways that serve humanity's good and sustainability, the meaning of and need for AI in the AECI can disappear, hindering sustainability and sustainable development. Based on an in-depth literature review, this chapter examines the trade-offs and potential opportunity costs of AI in the AECI from the perspectives of sustainability and sustainable development. It highlights the need for integrated thinking about the relationships among AI, sustainability, and sustainable development in the AECI, and emphasizes that the use of AI in the AECI affects not only the individual pillars of sustainability but also their interactions. Because challenges related to the use of AI can affect the outcomes of trade-off and opportunity-cost assessments by companies in the AECI, these challenges need to be addressed strategically and effectively at the strategic-management level of AECI companies and in countries' sustainable development plans. Furthermore, this chapter highlights the importance of AECI companies performing trade-off and opportunity-cost assessments of their AI investments effectively and strategically, considering the impacts of these investments on sustainability and sustainable development. As such assessments can influence competition in the AECI and its impacts on sustainability and sustainable development, they need to be performed strategically, covering the levels of the AECI company, the industry, the country, and humanity, in a way that supports sustainable development and the United Nations Sustainable Development Goals. This chapter can be beneficial for all stakeholders of AI, Society 5.0, and Industry 5.0.
- Research Article
- 10.32755/sjeducation.2025.01.183
- Apr 4, 2025
- Scientific Herald of Sivershchyna. Series: Education. Social and Behavioural Sciences
Abstract. The aim of the article is to explore the possibilities and challenges of integrating artificial intelligence (AI) into teaching English (TE). This aim involves the following research tasks: evaluating the effectiveness of AI in improving students’ learning outcomes, the potential for personalized learning, the automation of feedback, and the impact of AI on the role of the teacher; and investigating the potential issues that arise in the context of AI integration, including data privacy concerns, access to technology, and the need for professional development. Methodology. The goal and tasks were pursued using such research methods as qualitative and quantitative analyses, case studies, survey research (with questions designed to gather both quantitative data, such as the frequency of AI tool usage, and qualitative feedback, such as perceived effectiveness and ease of use), content analysis (reviewing learning platforms, mobile apps, and AI-powered language assistants to assess their educational content, usability, and effectiveness in fostering language skills), and comparative analysis. Scientific novelty. The study’s novelty lies in its multi-dimensional approach to AI integration in teaching English, focusing on a specialized, non-traditional educational institution, the PAU. By exploring AI’s potential in such a unique context, the research offers new insights into how AI can be applied to meet the specific professional language needs of cadets preparing for careers in the penitentiary system. This study expands the body of knowledge regarding AI in education and provides a foundation for future research and practical implementation of AI tools in specialized educational settings. Research results. The article examines the integration of AI in education, specifically focusing on TE and its implications for academic integrity. It highlights the transformative potential of AI tools such as intelligent tutoring systems, automated feedback mechanisms, and conversational agents in enhancing personalized learning, engagement, and skill development. The research provides insights into how AI supports language proficiency improvement while addressing challenges like ethical concerns, teacher training, and technical limitations. Using the PAU as a case study, the article explores the unique applications of AI in specialized educational settings, emphasizing the importance of ethical AI use and ongoing research. Practical implications. Integrating AI in TE opens new possibilities for personalized learning, automated feedback, and improving language skills. However, the research also emphasizes the importance of maintaining academic integrity, particularly in the context of potential misuse of AI tools. Tools such as Grammarly, Duolingo, and ChatGPT significantly enhance grammar, vocabulary, pronunciation, and writing proficiency. Interactive platforms adapt to individual learners’ needs, providing a flexible and effective learning environment. AI complements, but does not replace, the role of the teacher. Educators remain crucial in motivating students, offering emotional support, and addressing complex issues beyond AI’s capabilities. Successful AI integration requires teacher training to use these technologies effectively. Technical difficulties, limited technological access, and AI’s inability to fully understand cultural context need attention. Ethical concerns, such as data privacy and preventing algorithmic biases, remain important issues. Value (originality). The value of the study lies in its evaluation of the effectiveness of AI in improving students’ learning outcomes, the potential for personalized learning of English, the automation of feedback, and the impact of AI on the role of the teacher, and in its investigation of the potential issues that arise in the context of AI integration, including data privacy concerns, access to technology, and the need for professional development for teachers. Key words: AI in education, teaching English (TE), academic integrity, personalized learning, intelligent tutoring systems, automated feedback, ethical AI use.
- Research Article
- 10.12688/openreseurope.20023.1
- Apr 28, 2025
- Open Research Europe
Individuals are increasingly integrating Artificial Intelligence (AI) into their lives, adopting various use cases in healthcare, education, urban mobility, and more. AI has the potential to enhance efficiency, well-being, and societal progress, but it also carries risks associated with ethical challenges, privacy concerns, and social inequality. A significant research gap remains in understanding the impacts of AI use cases adopted by people on the achievement of the Sustainable Development Goals (SDGs). This study addresses that gap through a systematic analysis of whether AI adoption by people supports or hinders progress toward the SDGs. Using the PRISMA framework, we conducted a systematic review of 131 studies. The results show that the overall impact of AI use cases adopted by individuals on the SDGs is moderately positive. These use cases significantly contribute to areas such as healthcare, innovation, and sustainable urban development, yet their effects remain complex and context dependent. While individually adopted AI fosters efficiency and well-being in many domains, concerns about job displacement, biased decision-making, and misinformation highlight the need for responsible deployment. The study emphasizes the importance of ethical AI governance, equitable access, and AI literacy to ensure a positive contribution to sustainable development. Future research should not only empirically evaluate the real-world impacts of AI applications adopted by people from a sustainability perspective but also explore and develop strategies to mitigate negative impacts on progress toward the SDGs while maximizing positive contributions. This research contributes to the evolving discourse on AI adoption by people and its implications for sustainable development.
- Research Article
- 10.52783/jisem.v10i35s.6285
- Apr 13, 2025
- Journal of Information Systems Engineering and Management
The rise of Artificial Intelligence (AI) in business operations has redefined traditional models of corporate governance. As AI systems increasingly participate in critical decision-making processes, the need for robust ethical oversight and accountability frameworks has become paramount. This paper explores the intersections of AI technologies and corporate governance principles, aiming to highlight both the opportunities and challenges that arise in this evolving digital landscape. We delve into how AI can enhance transparency, streamline regulatory compliance, and improve decision accuracy, while simultaneously posing ethical concerns related to bias, accountability, data privacy, and control. The paper also evaluates existing frameworks for ethical AI governance, such as the OECD Principles on AI, the EU’s AI Act, and ISO standards, drawing comparisons with corporate governance standards such as the OECD Corporate Governance Principles and national codes. Through comprehensive analysis and data-driven insights, we propose a dynamic governance model that integrates ethical AI practices within the corporate governance structure. Graphs, tables, and a conceptual diagram illustrate the maturity stages of AI integration in governance systems, stakeholder accountability models, and risk-management frameworks. This review contributes to the growing discourse on AI governance by offering strategic recommendations and emphasizing the role of board leadership, interdisciplinary ethics committees, and regulatory collaboration.
- Research Article
- 10.3390/jcm14051605
- Feb 27, 2025
- Journal of clinical medicine
Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and the Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs. Conclusions: Trustworthy AI in healthcare requires more than technical advancements; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.
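The paper's quantifiable trustworthiness metrics are not reproduced in the abstract, so the sketch below is purely hypothetical: it shows one way a composite trustworthiness score could be assembled from sub-scores, with every dimension, weight, and threshold invented for the example.

```python
# Hypothetical composite trustworthiness score for a clinical AI system.
# Dimensions, weights, and the deployment threshold are illustrative
# assumptions, not the metrics defined in the paper.
WEIGHTS = {"transparency": 0.30, "accountability": 0.30,
           "fairness": 0.25, "sustainability": 0.15}  # sums to 1.0

def trustworthiness(scores):
    """Weighted average of sub-scores in [0, 1]; all dimensions required."""
    missing = WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

candidate = {"transparency": 0.8, "accountability": 0.7,
             "fairness": 0.9, "sustainability": 0.6}
score = trustworthiness(candidate)
print(f"{score:.3f}")  # 0.765 = 0.3*0.8 + 0.3*0.7 + 0.25*0.9 + 0.15*0.6
if score < 0.70:       # illustrative deployment gate; 0.765 passes here
    print("Below threshold: require additional review before deployment.")
```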
- Research Article
- 10.9790/0661-2605031925
- Oct 1, 2024
- IOSR Journal of Computer Engineering
Africa as a continent recognizes the need for Artificial Intelligence (AI) governance frameworks. However, despite initiatives like the African Union's Continental AI Strategy, African nations have grappled with challenges in formulating comprehensive AI policies, while established global frameworks provide advanced benchmarks for comparison. To address this disparity, this study conducted a comparative analysis of AI governance in Africa relative to global standards and practices. Employing a comprehensive document analysis methodology, the research examined key policy documents, strategic frameworks, and regulatory guidelines across African, European, American, and Asian contexts. The findings revealed that while the African Union has demonstrated commitment to coordinated AI governance, African approaches generally lag behind global benchmarks in comprehensiveness, formalization, and ethical integration. The study identified notable fragmentation in AI governance across African nations, contrasting with more unified approaches in other regions. African frameworks emphasize leveraging AI for socio-economic development, diverging from the risk-mitigation focus seen in EU regulations. The integration of indigenous African ethical perspectives in AI governance frameworks remains limited, presenting both challenges and opportunities. Significant disparities in digital infrastructure and AI capacity between Africa and other regions were found to affect governance implementation. The study concluded that despite these challenges, there is potential for Africa to develop innovative, context-specific AI governance models that could contribute valuable insights to the global discourse on responsible AI development. Recommendations included accelerating the implementation of the Continental AI Strategy, prioritizing investment in digital infrastructure, developing Africa-centric AI ethics frameworks, establishing mechanisms for regular benchmarking against global standards, fostering increased collaboration, and implementing AI literacy programs across the continent.
- Research Article
- 10.1152/advan.00119.2025
- Dec 1, 2025
- Advances in physiology education
As artificial intelligence (AI) becomes more integrated into healthcare, medical students need to develop foundational AI literacy. Yet traditional, descriptive teaching methods for AI topics are often ineffective in engaging learners. This article introduces a new application of cinema to teaching AI concepts in medical education. With meticulously chosen clips from the movie "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)", students were introduced to the primary differences between artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). The method triggered encouraging responses from students, with learners indicating greater conceptual clarity and heightened interest. Film, as an emotive and visual medium, not only makes difficult concepts easy to understand but also encourages curiosity, ethical consideration, and higher-order thought. This pedagogic intervention demonstrates how narrative-based learning can make abstract AI systems more relatable and clinically relevant for future physicians. Beyond technical content, the method offers opportunities to cultivate critical engagement with the ethical and practical dimensions of AI in healthcare. Integrating film into AI instruction could bridge the gap between theoretical knowledge and clinical application, offering a compelling pathway to enrich medical education in a rapidly evolving digital age. NEW & NOTEWORTHY: This article introduces a new learning strategy that employs film to teach artificial intelligence (AI) principles in medical education. By introducing clips from the "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)" movie to clarify artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI), the approach converted passive learning into an emotionally evocative and intellectually stimulating experience. Students reported enhanced comprehension and increased interest in artificial intelligence. This narrative-driven, visually oriented approach promises to incorporate technical and ethical AI literacy into medical curricula with enduring relevance and impact.
- Discussion
- 10.2147/jmdh.s541271
- Sep 1, 2025
- Journal of Multidisciplinary Healthcare
The application of generative artificial intelligence (AI) technology in the healthcare sector can significantly enhance the efficiency of China’s healthcare services. However, risks persist in terms of accuracy, transparency, data privacy, ethics, and bias. These risks are manifested in three key areas: first, the potential erosion of human agency; second, issues of fairness and justice; and third, questions of liability and responsibility. This study reviews and analyzes the legal and regulatory frameworks established in China for the application of generative AI in healthcare, as well as relevant academic literature. Our research findings indicate that while China is actively constructing an ethical and legal governance framework in this field, the regulatory system remains inadequate and faces numerous challenges. These challenges include lagging regulatory rules; an unclear legal status of AI in laws such as the Civil Code; immature standards and regulatory schemes for medical AI training data; and the lack of a coordinated regulatory mechanism among different government departments. In response, this study attempts to establish a governance framework for generative AI in the medical field in China from both legal and ethical perspectives, yielding relevant research findings. Given the latest developments in generative AI in China, it is necessary to address the challenges of its application in the medical field from both ethical and legal perspectives. This includes enhancing algorithm transparency, standardizing medical data management, and promoting AI legislation. As AI technology continues to evolve, more diverse technical models will emerge in the future. This study also proposes that to address potential risks associated with medical AI, efforts should be made to establish a global AI ethics review committee to promote the formation of internationally unified ethical and legal review mechanisms.