
Related Topics

  • Health Care Ethics
  • Ethical Governance
  • Engineering Ethics
  • Ethical Guidelines
  • Computer Ethics
  • Business Ethics
  • Medical Ethics

Articles published on AI ethics

3457 search results, sorted by recency
  • New
  • Research Article
  • 10.1186/s13040-025-00505-1
A fairness-aware machine learning framework for maternal health in Ghana: integrating explainability, bias mitigation, and causal inference for ethical AI deployment.
  • Dec 5, 2025
  • BioData mining
  • Augustus Osborne + 1 more

  • New
  • Research Article
  • 10.1108/tg-08-2025-0240
Generative AI and the urban AI policy challenges ahead: Trustworthy for whom?
  • Dec 4, 2025
  • Transforming Government: People, Process and Policy
  • Igor Calzada

Purpose This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of Generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue–spanning applied economics, public policy, AI ethics and urban governance – that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences. Design/methodology/approach This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions. Findings This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems – particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. 
The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement. Research limitations/implications As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures. Practical implications This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools – such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance – the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts. Social implications The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. 
These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences. Originality/value This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis – bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.

  • New
  • Research Article
  • 10.34190/icair.5.1.4361
Preliminary Study of TexAI: Where Adaptive AI Reimagines Law Enforcement Training
  • Dec 4, 2025
  • International Conference on AI Research
  • Shreyas Kumar + 3 more

Law enforcement agencies today operate at the frontline of data-sensitive decision-making, yet their training systems remain alarmingly analog. This gap has far-reaching consequences: The Police Department unintentionally deleted over eight terabytes of digital evidence, affecting nearly 17,000 criminal cases and causing significant public backlash and judicial delays (NBC 5 Dallas-Fort Worth, 2019). The root of this crisis lies not in technology alone, but in an outdated training paradigm that fails to prepare officers for the ethical, operational, and procedural demands of an AI-driven society. This paper explores how adaptive, explainable AI (XAI) can reframe the relationship between law enforcement and digital governance. We present TEXAI (XAI-powered Knowledge Base for Texas Law Enforcement), an AI-powered prototype built to modernize cybersecurity training in policing. Developed through user interviews and field research, the app combines real-time regulation updates with personalized, scenario-based microlearning, targeting a key challenge: officers forgetting or misunderstanding complex, evolving legal protocols. Our research examines how integrating XAI principles into law enforcement workflows introduces not only technological efficiency but critical epistemological transparency, fostering institutional accountability. We situate this intervention in the broader context of AI's role in public-sector transformation, arguing that ethical deployment of adaptive systems is essential to restoring public trust and preventing catastrophic human error. TEXAI also functions as a case study for how context-aware, role-specific AI tools can evolve through participatory design, responding to both human vulnerability and structural inefficiency. We contrast our solution with existing national systems such as PoliceOne Academy and Axon Academy, highlighting a novel intersection between AI explainability, justice system integrity, and digital literacy. The implications extend beyond law enforcement: in demonstrating how adaptive AI can personalize and democratize professional training in real time, we propose a scalable model for AI's responsible integration into high-stakes, socially critical domains. This work contributes to the growing discourse around ethical AI, resilience in digital infrastructure, and the future of labor in AI-mediated institutions.

  • New
  • Research Article
  • 10.34190/icair.5.1.4344
AGS-INTEL: Authentic & Granular Source for Data Breach Intelligence
  • Dec 4, 2025
  • International Conference on AI Research
  • Anil Parthasarathi + 2 more

As artificial intelligence reshapes the cybersecurity landscape, the demand for a trustworthy, real-time intelligence platform to track security incidents has become mission-critical. This paper proposes AGS-INTEL, an AI-driven platform designed to revolutionize data breach intelligence by providing a credible, real-time repository that consolidates, verifies, and contextualizes global security incidents. Unlike traditional databases, AGS-INTEL employs a validated scoring algorithm and enriched metadata to capture breach dimensions (legal, technical, sectoral, geopolitical), drawing from GDPR/HIPAA disclosures, threat intelligence, dark web forums, and academic reports, among other sources. Utilizing NLP and agentic AI, it extracts structured metadata from unstructured narratives while integrating ethical data scraping, regulatory compliance, and cross-jurisdictional filtering to ensure high fidelity. A visual analytics dashboard empowers stakeholders, including regulators, policymakers, cybersecurity professionals, and journalists, to analyze breach trends by industry, geography, and threat modality, enhancing transparency and risk governance. By delivering authenticated, actionable data, AGS-INTEL addresses critical gaps in existing tools, setting a new standard for ethical AI in breach intelligence and strengthening societal resilience against escalating cyber threats.
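The metadata-extraction step described above can be pictured with a minimal pattern-based sketch. The field names, regular expressions, and sample narrative below are illustrative assumptions for demonstration only, not AGS-INTEL's actual pipeline (which uses NLP and agentic AI over many source types):

```python
import re

# Illustrative sketch: pull a few structured fields (record count, sector,
# regulation mentions) out of an unstructured breach narrative.
def extract_breach_metadata(narrative: str) -> dict:
    records = re.search(r"([\d,]+)\s+(?:records|accounts)", narrative)
    sector = re.search(r"\b(healthcare|finance|retail|government)\b",
                       narrative, re.IGNORECASE)
    regs = re.findall(r"\b(GDPR|HIPAA)\b", narrative)
    return {
        "records": int(records.group(1).replace(",", "")) if records else None,
        "sector": sector.group(1).lower() if sector else None,
        "regulations": sorted(set(regs)),  # deduplicated, stable order
    }

# Hypothetical narrative, invented for this sketch.
sample = ("A healthcare provider disclosed under HIPAA that 312,000 records "
          "were exposed; GDPR notification followed for EU patients.")
meta = extract_breach_metadata(sample)
```

A production system would replace these regexes with learned extractors, but the input/output shape — free text in, enriched metadata out — is the same.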

  • New
  • Research Article
  • 10.54254/2755-2721/2025.30290
Robust Causal Detection of Generative AI Manipulation in ESG Disclosures
  • Dec 3, 2025
  • Applied and Computational Engineering
  • Runrun Lei

With the increasing application of generative AI in finance, the authenticity of corporate ESG disclosures has become even more prominent. Generative AI can improve the consistency and persuasiveness of texts, but it can also be misused to misrepresent genuine environmental risks, overstate commitments to society, or manipulate governance narratives in generative greenwashing. To address these threats to ESG disclosure authenticity, this work creates a new causal inference and robust learning model that treats ESG text features as latent causal variables, demonstrating the impact of AI-generated manipulation on semantic shifts and causal relationships. On the Refinitiv ESG dataset, the Bloomberg ESG dataset, and GPT-4 and Claude 3 generated data, the proposed model outperformed state-of-the-art models with an AUC of 0.957 and an F1 score of 0.941, with an average standard deviation of 0.5%. The study demystifies the manipulation mechanisms of generative AI in ESG narratives and offers analytic tools to trace AI-generated manipulation in finance, supporting cross-sector integration between AI ethics and financial regulation.

  • New
  • Research Article
  • 10.14195/2184-9781_5_1
Rhizome bundles, multiple agencies, and ascription
  • Dec 2, 2025
  • Undecidabilities and Law
  • Edmundo Balsemão-Pires

Starting with a review of G. Deleuze’s and F. Guattari’s metaphysical and methodological assumptions in Mille-Plateaux, this paper aims to critically appraise Bruno Latour’s rhizomatic epistemology, particularly in agency and network formation, and its Leibnizian inspiration, as mediated by G. de Tarde and G. Deleuze. It also seeks to evaluate the soundness of some of ANT’s metaphysical assumptions, such as the metaphysical primacy and irreducibility of forces, in light of the increasing participation of artificial agents in communication and social interaction and the growing technological transformation of the 'natural attitude.' The meaning of artificial agency is the empirical perspective through which I will evaluate ANT’s epistemological and metaphysical claims. The paper will define artificial agents and artificial agency and describe the social context of the 'pool of agents' that includes humans and machines in digital networks of human-machine interactions. The normative themes of causality, accountability, and responsibility of artificial agents, central to the ethics of artificial intelligence, will also be explored within the critical appraisal of ANT’s description of networks.

  • New
  • Research Article
  • 10.11591/ijict.v14i3.pp960-971
Legal challenges of artificial intelligence in the European Union’s digital economy
  • Dec 1, 2025
  • International Journal of Informatics and Communication Technology (IJ-ICT)
  • Volodymyr I Kudin + 4 more

This article critically examines the legal and regulatory challenges posed by artificial intelligence (AI) within the European Union’s (EU) digital economy, focusing on the recently adopted EU AI Act (Regulation 2024/1689). While previous studies have addressed AI's ethical and theoretical dimensions, this research fills a gap by analyzing the Act’s practical application across its temporal, personal, material, and territorial scopes. The study adopts a qualitative legal methodology, integrating doctrinal interpretation, comparative analysis, and systemic review of EU and international legal instruments. Key findings reveal that the EU AI Act establishes a pioneering risk-based regulatory framework, distinguishing itself through strong safeguards for fundamental rights, transparency, and human oversight. However, critical limitations remain, including ambiguous high-risk classifications, reliance on provider self-assessment, and exemptions for national security that may undermine human rights protections. The article compares the EU approach with those of the United States and China, illustrating divergent models of AI regulation and their global implications. It argues that while the EU AI Act sets an ambitious precedent, its success depends on effective enforcement, stakeholder compliance, and international regulatory convergence. By addressing these challenges, the EU can shape a globally influential framework for ethical and responsible AI deployment. This study contributes to the evolving academic and policy debate on AI governance by offering practical insights and recommendations for future research and legal development.

  • New
  • Research Article
  • 10.1016/j.pop.2025.07.009
Ethical and Legal Considerations of Medical Artificial Intelligence.
  • Dec 1, 2025
  • Primary care
  • Palmer Montalbano

  • New
  • Research Article
  • 10.1152/advan.00119.2025
Concepts behind clips: cinema to teach the science of artificial intelligence to undergraduate medical students.
  • Dec 1, 2025
  • Advances in physiology education
  • Krishna Mohan Surapaneni

As artificial intelligence (AI) is becoming more integrated into the field of healthcare, medical students need to learn foundational AI literacy. Yet, traditional, descriptive teaching methods of AI topics are often ineffective in engaging learners. This article introduces a new application of cinema to teaching AI concepts in medical education. With meticulously chosen clips from the movie "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)", the students were introduced to the primary differences between artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). This method triggered encouraging responses from students, with learners indicating greater conceptual clarity and heightened interest. Film as an emotive and visual medium not only makes difficult concepts easy to understand but also encourages curiosity, ethical consideration, and higher order thought. This pedagogic intervention demonstrates how narrative-based learning can make abstract AI systems more relatable and clinically relevant for future physicians. Beyond technical content, the method can offer opportunities to cultivate critical engagement with ethical and practical dimensions of AI in healthcare. Integrating film into AI instruction could bridge the gap between theoretical knowledge and clinical application, offering a compelling pathway to enrich medical education in a rapidly evolving digital age. NEW & NOTEWORTHY: This article introduces a new learning strategy that employs film to instruct artificial intelligence (AI) principles in medical education. By introducing clips from the "Enthiran (Tamil)/Robot (Hindi)/Robo (Telugu)" movie to clarify artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI), the approach converted passive learning into an emotionally evocative and intellectually stimulating experience. 
Students experienced enhanced comprehension and increased interest in artificial intelligence. This narrative-driven, visually oriented process promises to incorporate technical and ethical AI literacy into medical curricula with enduring relevance and impact.

  • New
  • Research Article
  • 10.1016/j.iccn.2025.104213
The Ideal Human Care in Green ICU: An integrated AI framework for future ICU care.
  • Dec 1, 2025
  • Intensive & critical care nursing
  • Masoud Arabfard + 5 more

  • New
  • Research Article
  • 10.1016/j.pop.2025.07.002
Artificial Intelligence in Risk Assessment and Prevention.
  • Dec 1, 2025
  • Primary care
  • Rosemary Nabaweesi + 3 more

  • New
  • Research Article
  • 10.1016/j.joitmc.2025.100648
Ethical AI integration in municipal self-service technologies: A case study from UAE public sector transformation
  • Dec 1, 2025
  • Journal of Open Innovation: Technology, Market, and Complexity
  • Muath Alyileili + 1 more

  • New
  • Research Article
  • 10.1016/j.sftr.2025.101126
The philosophy of cognitive diversity: Rethinking ethical AI design through the lens of neurodiversity
  • Dec 1, 2025
  • Sustainable Futures
  • Joffrey Baeyaert

  • New
  • Research Article
  • 10.53591/iti.v17i24.2567
Del Leviatán al algoritmo: reflexiones sobre el Estado y la inteligencia artificial (From the Leviathan to the Algorithm: Reflections on the State and Artificial Intelligence)
  • Nov 30, 2025
  • Investigación, Tecnología e Innovación
  • Alexander Bellafiore

Context: Artificial intelligence, being more efficient, intelligent, objective, and practical than the average person, can replace humans in many productive and administrative tasks, including public service and government management in States. Objective: To reflect on the potential applications of artificial intelligence in the State with partial or absolute autonomy, to evaluate its relevance and its ethical and social implications in accordance with the political ideas of Thomas Hobbes in his work Leviathan and Theodore Kaczynski in Industrial Society and Its Future. Method: Documentary review of primary sources on the political ideas in the aforementioned works, as well as relevant documents on the ethics of artificial intelligence in the State issued by international organizations. Results: The implementation of autonomous artificial intelligence in governments is an idea that originated in the need to optimize the human administration of government. Reflections: The integration of artificial intelligence in governments generates risks and resistance on the part of political and social elites due to their drive and desire for power.

  • New
  • Research Article
  • 10.62338/0rw0jt51
Generational Perceptions of Digital Ethics and Employee Well-Being in AI-Enabled Health and IT Workplaces: A Systematic Review
  • Nov 30, 2025
  • The Maldives National Journal of Research
  • Sana Naz + 1 more

In 21st-century workplaces, where the pace of digital change is fast, ethical issues are growing rapidly, with 70% of workers indicating unease due to AI. In particular, millennials and Gen Z expressed 30% more ethical and trust-related concerns in AI workplaces. Although the integration of AI in workplaces, including healthcare and IT, has accelerated lately, generational differences in perceptions of digital ethics remain a neglected influence on employee well-being, and that is what this review seeks to determine, with the following objectives: i) to synthesize existing research on generational attitudes toward digital ethics in AI-driven healthcare and IT workplaces; ii) to examine the relationship between generational attitudes toward digital ethics and employee well-being, including stress, trust, and job satisfaction, among employees aged 24 to 55 in AI-driven health and IT workplaces; and iii) to identify research gaps and provide practical recommendations for organizations to foster ethical AI adoption through training, clear policies, and inclusive practices in multigenerational workplaces. Following the PRISMA framework, a systematic review was conducted, and data were sourced from three databases, i.e., ScienceDirect, PubMed, and Google Scholar, using the keywords “digital ethics”, “generational differences”, “well-being”, and “AI workplaces” with Boolean operators. A total of 33 full-text studies that met the inclusion criteria were included. The results showed a significant generational disparity in the interpretation of digital ethics, with younger employees being more accepting and older generations being more concerned about AI-related privacy and transparency. These perceptual differences affect employees’ psychological well-being, trust, stress, and job satisfaction, particularly in healthcare, given the ethical sensitivity of patient data privacy.

  • New
  • Research Article
  • 10.47941/ijce.3352
The Evolving Role of HR Technology Professionals in the AI Era
  • Nov 29, 2025
  • International Journal of Computing and Engineering
  • Ramesh Nyathani

The rapid adoption of Artificial Intelligence (AI) across Human Resources has transformed the expectations, responsibilities, and required skillsets of HR Technology professionals. Historically viewed as system administrators and process enablers, HR technology teams are now becoming strategic architects of digital workforce transformation. As organizations increasingly rely on cloud-based HR platforms, automation, and real-time data intelligence, HR Tech professionals play a critical role in driving efficiency, improving employee experience, and shaping data-driven decision-making [1]. This paper explores how AI reshapes core HR functions—recruitment, onboarding, employee self-service, learning, performance management, and workforce planning—and examines the evolving responsibilities of those who manage these technologies. Beyond configuration and support, HR Tech teams now govern data models, manage AI-driven workflows, ensure cybersecurity compliance, and design consumer-grade digital experiences. The shift demands new competencies: analytics literacy, process automation, UX awareness, ethical AI governance, and vendor integration expertise. Looking ahead to 2026, the paper projects the emergence of specialized roles such as People Analytics Scientists, HR Automation Engineers, Employee Experience Designers, and HR Systems Product Owners. HR Service Desks will be augmented or replaced by AI-enabled chatbots, predictive models will forecast turnover and engagement risks, and onboarding will become hyper-personalized and autonomous.

  • New
  • Research Article
  • 10.1007/s43681-025-00851-0
Development of AI ethics guidelines model based on AI life cycle
  • Nov 27, 2025
  • AI and Ethics
  • Nakyung Lee

Abstract The increasing adoption of artificial intelligence (AI) requires the development of robust ethical frameworks to address emerging ethical challenges. Existing AI ethical guidelines are predominantly abstract and lack actionable requirements for practical implementation. This study addresses this gap by structuring AI ethics guidelines according to the AI life cycle, thereby identifying specific ethical requirements for each stage. AI ethics guidelines from seven leading countries were compared using a refined six-stage AI life cycle model. The analysis demonstrates that all seven countries emphasize the Model Development and Monitor & Evaluate Performance stages, while significant gaps persist in ethical guidance for other stages. Based on these findings, this study proposes a standardized AI ethics framework aligned with the AI life cycle, translating abstract ethical principles into concrete, actionable requirements and establishing a normative basis for ethical accountability.
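The paper's stage-by-stage approach can be pictured as a simple mapping from life-cycle stages to checkable requirements. Only "Model Development" and "Monitor & Evaluate Performance" are stage names taken from the abstract; the other stage names and all requirement strings below are illustrative assumptions, not the proposed framework itself:

```python
# Hypothetical six-stage life cycle with example ethical requirements per stage.
LIFE_CYCLE_GUIDELINES = {
    "Planning & Design": ["document intended use", "assess stakeholder impact"],
    "Data Collection": ["verify consent basis", "audit sampling bias"],
    "Model Development": ["run fairness tests", "record training provenance"],
    "Verification & Validation": ["evaluate on held-out subgroups"],
    "Deployment": ["publish model card", "enable human override"],
    "Monitor & Evaluate Performance": ["track drift", "log contested decisions"],
}

def coverage_gaps(completed: dict) -> list:
    """Return stages whose listed requirements are not all satisfied."""
    return [
        stage
        for stage, reqs in LIFE_CYCLE_GUIDELINES.items()
        if not all(completed.get(stage, {}).get(r, False) for r in reqs)
    ]

# A project that has only addressed model-stage requirements still shows
# gaps in every other stage.
done = {"Model Development": {"run fairness tests": True,
                              "record training provenance": True}}
gaps = coverage_gaps(done)
```

Structuring guidelines this way is what turns abstract principles into requirements that can be audited stage by stage.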

  • New
  • Research Article
  • 10.1108/jices-06-2025-0138
A theoretical model of AI bias mitigation: incentives, regulation and equilibrium
  • Nov 27, 2025
  • Journal of Information, Communication and Ethics in Society
  • Subhasis Bera + 1 more

Purpose This study aims to develop a conceptual economic model to analyse bias in artificial intelligence (AI), decomposing it into functionality bias (arising from data and algorithms) and usage bias (stemming from user incentives). It explores how these biases interact and emerge endogenously in economic systems, proposing a cost-constrained optimisation framework for mitigation. Design/methodology/approach A mathematical optimisation model is formulated to minimise total bias, accounting for data curation costs, regulatory penalties and audit expenses. The model derives equilibrium conditions under which bias mitigation is cost-effective and identifies thresholds beyond which interventions fail. Findings Reducing usage bias enhances the effectiveness of functionality bias mitigation, creating a cascading fairness effect. High regulatory costs without incentive alignment can discourage mitigation, emphasising the need for balanced interventions. The model highlights the cost-fairness trade-offs and suggests critical thresholds for policy action. Research limitations/implications The theoretical conceptual model is static; future work should explore dynamic extensions and conduct empirical validation to enhance its applicability. Nonetheless, it provides a quantitative foundation linking AI ethics with economic policymaking. Practical implications Firms and policymakers can use the conceptual model to evaluate cost-efficient strategies for mitigating bias, particularly through early-stage interventions and the design of usage incentives. Social implications The study emphasises the importance of fairness in AI as both a technical and socioeconomic objective, essential for achieving equitable outcomes in areas such as finance, employment and public services. 
Originality/value Unlike prior approaches that treat fairness as exogenous, this study endogenises bias within an economic decision framework, presenting a novel lens to address AI fairness as a constrained optimisation problem.
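The cost-constrained optimisation idea can be illustrated with a toy numerical sketch. The functional forms, constants, and function names below are illustrative assumptions, not the authors' model; the sketch only shows how a stronger regulatory penalty shifts the cost-minimising mitigation effort upward:

```python
# Toy model: total cost = residual-bias harm + mitigation spend, for an
# effort level in [0, 1]. All shapes and constants are assumed for illustration.
def total_cost(effort, penalty_rate, curation_cost=1.0, audit_cost=0.5):
    functionality_bias = (1.0 - effort) ** 2   # data/algorithm bias falls with effort
    usage_bias = 0.5 * (1.0 - effort)          # user-side bias, partly coupled
    harm = penalty_rate * (functionality_bias + usage_bias)
    spend = (curation_cost + audit_cost) * effort ** 2
    return harm + spend

def best_effort(penalty_rate, steps=1000):
    """Grid-search the effort level that minimises total cost."""
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda e: total_cost(e, penalty_rate))

# A weak penalty makes low mitigation effort cheapest; a strong penalty
# pushes the cost-minimising effort toward the maximum.
low = best_effort(penalty_rate=0.1)
high = best_effort(penalty_rate=10.0)
```

Scanning `penalty_rate` in such a sketch is one way to locate the kind of threshold the paper describes, beyond which mitigation stops being cost-effective for the firm.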

  • New
  • Research Article
  • 10.1108/jices-03-2025-0069
Bridging AI ethics between communication and computer science: a care ethics approach to foster organization-public relationships
  • Nov 26, 2025
  • Journal of Information, Communication and Ethics in Society
  • Xiufang (Leah) Li + 2 more

Purpose Ethics enables organizations to effectively resolve dilemmas while acting socially responsible. This study aims to examine how current communication practices involving AI technologies align with domain-specific Generative AI (GenAI) guidelines to foster the quality of organizational-public relationships (OPR). Design/methodology/approach The discussion on ethical principles governing AI in computer science, along with the conceptualization of OPR linked to professional codes of ethics, informed by the feminist philosophy known as the ethics of care, contributes to the development of a proposed AI ethics framework to foster OPR. Drawing upon this framework, the use of content analysis to unpack industry discourse reveals the extent to which the industry’s understanding of GenAI ethics aligns with this proposed framework. The implications of implementing this framework to foster OPR are discussed. Findings Communication professionals view social responsibility and authenticity as crucial for ensuring ethical AI practice, with truthfulness, respect and equity following closely. However, adherence to ethical AI use in communication depends on the implementation of explainability, accuracy, fairness and machine autonomy in computer science. Embracing the ethics of care to integrate these ethical principles into the current AI ethics framework in communication becomes crucial for easing this tension. Originality/value The proposed AI ethics framework bridges AI ethics between communication and computer science by capturing social responsibility, authenticity, truthfulness, respect and equity. This framework helps shape professional codes of ethics to address challenges in the rapidly evolving AI-driven communication landscape and advocates for the engineering of responsible AI tools to foster quality OPR. 
The outcomes advance cross-disciplinary research and cross-sectoral practices of AI ethics by leveraging the ethics of care, thereby connecting AI ethics across computer science and other fields.

  • New
  • Research Article
  • 10.3389/fcomp.2025.1682190
AI-assisted academic cheating: a conceptual model based on postgraduate student voices
  • Nov 26, 2025
  • Frontiers in Computer Science
  • Nguyen Van Hanh + 1 more

Introduction: As AI tools become more widely used in higher education, concerns about AI-assisted academic cheating are growing. This study examines how postgraduate students interpret these behaviors.

Methods: We conducted an exploratory qualitative study, analyzing ten course-embedded reflective essays using conventional content analysis and identifying 159 meaning units, 34 codes, 12 categories, and 6 themes.

Results: Students described two main forms of AI-assisted cheating: misusing AI to complete academic tasks and improperly using AI-generated content. They attributed these behaviors to work pressure, ethical ambiguity, AI affordances, and gaps in institutional policies. They also proposed solutions, including clearer guidelines, improved assessment design, and stronger ethics education.

Discussion: The findings show that students construct their views on AI-assisted cheating within social, technological, and institutional contexts. Strengthening policy clarity and promoting a culture of ethical AI use can help institutions address these emerging challenges.
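The coding pipeline the abstract describes (meaning units rolled up into codes, then categories, then themes) can be sketched as a simple tally. All labels below are hypothetical illustrations, not data from the study.

```python
# Minimal sketch of a conventional content-analysis coding frame:
# each meaning unit carries a code, which rolls up into a category
# and a theme. Labels are invented for illustration only.
coded_units = [
    # (meaning unit, code, category, theme)
    ("used AI to draft an entire essay", "outsourcing assignments",
     "misusing AI for tasks", "forms of cheating"),
    ("submitted AI text without editing", "uncredited AI output",
     "improper use of AI content", "forms of cheating"),
    ("cheated because of tight deadlines", "time pressure",
     "work pressure", "perceived causes"),
]

# Distinct labels at each level of the coding hierarchy.
codes = {code for _, code, _, _ in coded_units}
categories = {cat for _, _, cat, _ in coded_units}
themes = {theme for _, _, _, theme in coded_units}

print(f"{len(coded_units)} meaning units, {len(codes)} codes, "
      f"{len(categories)} categories, {len(themes)} themes")
# → 3 meaning units, 3 codes, 3 categories, 2 themes
```

In the study itself this tally would yield the reported 159 meaning units, 34 codes, 12 categories, and 6 themes.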
