Exploring teachers’ AI literacy: cognitive, pedagogical, ethical, and contextual insights from Indian schools
ABSTRACT This study investigates Indian school teachers’ AI literacy across cognitive, pedagogical, ethical, and contextual dimensions. Using an exploratory qualitative design, semi-structured interviews with 16 teachers from diverse school types and regions reveal that AI understanding is often superficial and tool-centric, with minimal hands-on exposure – especially in rural and government schools – indicating a critical cognitive-technical gap. Pedagogical use largely remains at substitution-level tasks (e.g. grading, content delivery), with little evidence of Substitution, Augmentation, Modification, and Redefinition (SAMR)-informed transformation. Ethical dilemmas – spanning data privacy, algorithmic bias, misinformation, and cultural misalignment – are heightened in settings lacking training and digital governance. Contextual disparities, including infrastructure, language accessibility, and regional policy support, further mediate engagement, with urban private schools showing relatively higher, though uneven, adoption. Findings highlight the need for equity-driven, culturally responsive, and ethically grounded professional development. Recommendations include embedding AI fundamentals and ethics into pre-service curricula, designing low-bandwidth multilingual training tools with simulations, piloting school-level data governance frameworks, and advancing regionally adaptive AI integration strategies – offering implications for other Global South contexts facing similar challenges.
- Research Article
- 10.4018/ijkm.382384
- Jun 25, 2025
- International Journal of Knowledge Management
With the rapid advancement of artificial intelligence (AI) in education, AI-enhanced learning is driving profound pedagogical shifts, offering personalized teaching and resource optimization while raising ethical concerns such as data privacy, algorithmic bias, intellectual property, transparency, and equity. Using bibliometric methods, the authors of this study systematically analyzed global research on AI ethics in education over the past decade, revealing its dynamic evolution. AI ethics in education research has grown exponentially, shifting from early technical feasibility studies to the ethical risks of generative AI in specific scenarios. However, research remains technology-centric, lacking focus on appropriate educational contexts. The international network is dominated by the United States, China, and European Union countries, with limited participation from developing nations. This study also examines ethical dilemmas and gaps in current research frameworks, aiming to provide insights for academics, policymakers, and future studies.
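The bibliometric workflow this study describes rests on a simple mechanical core: counting how often keywords co-occur across papers and tracking those links over time. The sketch below is an illustrative, minimal version of that step only; the toy records are hypothetical, and real analyses of this kind typically run on Scopus/Web of Science exports with tools such as VOSviewer, CiteSpace, or the bibliometrix R package.

```python
# Minimal keyword co-occurrence count, the core step behind the
# thematic maps used in bibliometric studies (toy, hypothetical data).
from collections import Counter
from itertools import combinations

records = [  # author keywords per paper
    ["ai ethics", "education", "data privacy"],
    ["ai ethics", "algorithmic bias", "education"],
    ["generative ai", "ai ethics", "education"],
]

cooc = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooc[(a, b)] += 1  # each paper strengthens the pair's link

# The strongest links suggest thematic clusters in the field.
for pair, n in cooc.most_common(2):
    print(pair, n)  # ('ai ethics', 'education') 3, ...
```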
- Research Article
- 10.17576/ebangi.2025.2204.77
- Nov 30, 2025
- e-Bangi Journal of Social Science and Humanities
This paper investigates ethical issues of artificial intelligence (AI) within the field of Public Relations (PR), focusing on three interconnected dilemmas: algorithmic bias, data privacy, and professional responsibility. Based on a structured review of peer-reviewed literature, regulatory frameworks, and policy discussions, it offers a qualitative, interpretive analysis of how scholars and regulatory bodies perceive the ethical implications of AI adoption in strategic communication. The review pays particular attention to the Malaysian context, referencing statutory frameworks such as the Communications and Multimedia Act 1998 (Act 588) and the Personal Data Protection Act 2010 (Act 709), and critically assesses aspects of the National AI Governance and Ethics (AIGE) Code, including its reliance on a voluntary framework. Results show increased awareness of AI's advantages in PR work, especially in media monitoring, content creation and curation, and crisis communication, where it promises greater effectiveness and efficiency. However, these benefits are offset by ethical risks. The literature consistently highlights issues such as biased algorithmic behaviour, lack of transparency in decision-making, loss of human control, and weak protection of personal data. There is a notable gap between technological advancement and enforceable ethical principles, which raises concerns about responsibility and transparency in AI-supported communication. A limitation of the study is its reliance on secondary sources, which may be inadequate for forming a comprehensive picture of PR practitioners' experiences and the latest developments in the industry. Nonetheless, the analysis offers a valuable critique of how AI ethics are conceptualised in academia and regulatory discourse. Ultimately, the paper calls for more robust and context-specific regulatory frameworks and additional empirical research to explore how PR professionals encounter ethical dilemmas posed by AI in real-world practice. These insights aim to enrich academic debate and influence policymaking towards promoting responsible and accountable AI applications within the field of PR.
- Research Article
- 10.1108/cfw-07-2022-0019
- Sep 12, 2023
- The Case For Women
Learning outcomes: This case is designed to enable students to understand the role of women in artificial intelligence (AI); understand the importance of ethics and diversity in the AI field; discuss the ethical issues of AI; study the implications of unethical AI; examine the dark side of corporate-backed AI research and the difficult relationship between corporate interests and AI ethics research; understand the role played by Gebru in promoting diversity and ethics in AI; and explore how Gebru can attract more women researchers in AI and lead the movement toward inclusive and equitable technology.
Case overview/synopsis: The case discusses how Timnit Gebru, a prominent AI researcher and former co-lead of the Ethical AI research team at Google, is leading the way in promoting diversity, inclusion, and ethics in AI. Gebru, one of the most high-profile Black women researchers, is an influential voice in the emerging field of ethical AI, which identifies issues of bias, fairness, and responsibility. Gebru was fired from Google in December 2020 after the company asked her to retract a research paper she had co-authored about the pitfalls of large language models and embedded racial and gender bias in AI. While Google maintained that Gebru had resigned, she said she had been fired after raising issues of discrimination in the workplace and drawing attention to bias in AI. In early December 2021, a year after being ousted from Google, Gebru launched an independent, community-driven AI research organization called Distributed Artificial Intelligence Research (DAIR) to develop ethical AI, counter the influence of Big Tech in AI research and development, and increase the presence and inclusion of Black researchers in the field. The case discusses Gebru's journey in creating DAIR, the goals of the organization, and some of the challenges she could face along the way. As Gebru seeks to increase diversity in the field of AI and reduce the negative impacts of bias in the training data used in AI models, her challenges will be to develop a sustainable revenue model for DAIR, influence AI policies and practices inside Big Tech companies from the outside, inspire and encourage more women to enter the AI field, and build a decentralized base of AI expertise.
Complexity academic level: This case is meant for MBA students.
Social implications: Teaching Notes are available for educators only.
Subject code: CCS 11: Strategy
- Research Article
- 10.29121/shodhsamajik.v2.i2.2025.53
- Dec 15, 2025
- ShodhSamajik: Journal of Social Studies
Artificial intelligence (AI) has rapidly transformed modern societies, permeating sectors ranging from health care and criminal justice to education and public administration. While AI systems promise efficiency and innovation, they simultaneously generate significant legal and ethical dilemmas. This article explores the dual nature of AI as both a driver of progress and a source of regulatory and moral challenges. Legally, the discussion addresses liability gaps in autonomous decision-making, algorithmic bias, and data ownership under emerging frameworks such as the EU AI Act and UNESCO’s Ethics of AI. Ethically, it examines tensions between human autonomy and technological determinism, accountability deficits in self-learning systems, and implications for human dignity. The research argues that effective AI governance must be grounded in transparency, human oversight, and moral accountability, integrating legal safeguards with ethical obligations. By adopting an “Ethics-by-Design” approach, societies can reconcile innovation with the imperatives of justice and human rights. The article concludes that only through a convergent legal and ethical framework can AI evolve as a genuinely responsible technology serving human welfare.
- Research Article
- 10.55041/isjem02417
- Mar 16, 2025
- International Scientific Journal of Engineering and Management
The rise of digital crimes has necessitated the development of online crime judgmental systems to ensure efficient, transparent, and timely justice. Traditional legal systems often struggle with delays, resource limitations, and complexities in handling cybercrimes, leading to a demand for technology-driven judicial solutions. Online crime judgmental systems integrate artificial intelligence (AI), blockchain, cloud computing, and digital forensics to streamline case proceedings, enhance evidence validation, and improve accessibility to justice. This paper explores the role of AI in legal decision-making, virtual courtrooms, automated case management, and cybersecurity in digital trials. It analyzes the benefits of reducing judicial backlogs, accelerating verdicts, and enabling cross-border legal cooperation in handling cybercrimes. However, the study also highlights critical challenges, including data privacy concerns, algorithmic bias, ethical dilemmas, and the need for regulatory frameworks to maintain judicial integrity. The findings emphasize that while online crime judgmental systems offer significant advancements in criminal justice administration, their implementation requires a balanced approach that combines technology with human oversight. This research contributes to understanding how digital transformation can shape the future of law enforcement, crime investigation, and online dispute resolution, ensuring a more accessible and efficient justice system. KEYWORDS: Online Crime Judgment System, Cybercrime Justice, Digital Legal System, AI in Criminal Justice, Virtual Court Proceedings, Automated Crime Investigation, Blockchain for Legal Security, Cyber Law and Digital Evidence, E-Judiciary and Online Courts, Artificial Intelligence in Law, Machine Learning in Legal Decision-Making, Online Dispute Resolution (ODR), Digital Forensics in Crime Judgment, Cloud-Based Crime Case Management, Cybersecurity in Online Crime Trials, Legal Tech for Criminal Justice, Judicial Automation and AI Ethics, Cross-Border Cybercrime Adjudication, Data Protection in Online Courts, Predictive Analytics in Criminal Law.
- Research Article
- 10.59298/iaajb/2025/1313743
- Aug 3, 2025
- IAA Journal of Biological Sciences
Artificial Intelligence (AI) has transformed healthcare by enhancing diagnostic accuracy, treatment personalization, and health service efficiency. However, mounting evidence reveals that AI systems can perpetuate or even amplify existing disparities related to race, gender, socioeconomic status, and geographic location. Biases often originate from imbalanced training datasets, flawed algorithm design, and unequal data collection practices. These biases have led to misdiagnoses, unequal resource allocation, and inadequate treatment recommendations, disproportionately affecting marginalized communities. This review explores the roots of algorithmic bias in healthcare AI, analyzing real-world examples such as COVID-19 triage systems and diagnostic tools that underperform in minority populations. It also examines mitigation strategies, including bias-aware data collection, algorithm design techniques, regulatory frameworks, and stakeholder engagement. Successful case studies and future research directions are presented, emphasizing fairness, transparency, and trust in computational medicine. Establishing robust, bias-resilient AI frameworks is critical to achieving equitable health outcomes and reinforcing the ethical foundations of digital health. Keywords: AI bias, health equity, algorithmic fairness, medical AI, healthcare disparities, machine learning, ethical AI, computational medicine.
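One concrete way to see the underperformance in minority populations that this review describes is to compare error rates by patient group. The sketch below, using entirely hypothetical data and names, checks the false-negative rate (missed diagnoses) per group; it illustrates the auditing idea only and is not a tool from the studies reviewed.

```python
# Per-group false-negative-rate check for a diagnostic classifier
# (all data and names are hypothetical placeholders).

def false_negative_rate(y_true, y_pred):
    """FNR = missed positives / actual positives."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return float("nan")
    missed = sum(1 for _, p in positives if p == 0)
    return missed / len(positives)

# Toy predictions split by patient group.
cohorts = {
    "group_a": {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 1, 0, 0, 0]},
    "group_b": {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 0, 0, 0, 0]},
}

for name, d in cohorts.items():
    print(f"{name}: FNR = {false_negative_rate(d['y_true'], d['y_pred']):.2f}")
# group_a: FNR = 0.33, group_b: FNR = 0.67 -> a disparity worth investigating
```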
- Research Article
- 10.51594/csitrj.v3i3.1559
- Dec 30, 2022
- Computer Science & IT Research Journal
As artificial intelligence (AI) increasingly integrates into various aspects of society, addressing bias in machine learning models and software applications has become crucial. Bias in AI systems can originate from various sources, including unrepresentative datasets, algorithmic assumptions, and human factors. These biases can perpetuate discrimination and inequity, leading to significant social and ethical consequences. This paper explores the nature of bias in AI, emphasizing the need for ethical AI practices to ensure fairness and accountability. We first define and categorize the different types of bias—data bias, algorithmic bias, and human-induced bias—highlighting real-world examples and their impacts. The discussion then shifts to methods for mitigating bias, including strategies for improving data quality, developing fairness-aware algorithms, and implementing robust auditing processes. We also review existing ethical guidelines and frameworks, such as those proposed by IEEE and the European Union, which provide a foundation for ethical AI development. Challenges in identifying and addressing bias are examined, such as the trade-offs between fairness and model accuracy, and the complexities of legal and regulatory requirements. Future directions are considered, including emerging trends in ethical AI, the importance of interdisciplinary collaboration, and innovations in bias detection and mitigation. In conclusion, ongoing vigilance and commitment to ethical practices are essential for developing AI systems that are equitable and just. This paper calls for continuous improvement and proactive measures from developers, researchers, and policymakers to create AI technologies that serve all individuals fairly and without bias. Keywords: Ethical AI, Bias, Machine Learning, Models, Software Applications.
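The auditing processes mentioned above can be made concrete with a small example. The following sketch, assuming hypothetical toy data, computes per-group positive-prediction rates and the demographic parity difference, one of the simplest fairness metrics an audit might report; production audits would add statistical tests and dedicated fairness libraries.

```python
# Demographic parity audit sketch (hypothetical data and names).
# Demographic parity difference = |P(pred=1 | group=a) - P(pred=1 | group=b)|.
from collections import defaultdict

def positive_rates(groups, preds):
    """Fraction of positive predictions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for g, p in zip(groups, preds):
        counts[g][0] += int(p == 1)
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Toy example: a model deciding approvals (1 = approve).
groups = ["a", "a", "a", "b", "b", "b", "b"]
preds  = [ 1,   1,   0,   1,   0,   0,   0 ]

rates = positive_rates(groups, preds)
print({g: round(r, 3) for g, r in rates.items()})  # {'a': 0.667, 'b': 0.25}
print(f"demographic parity difference: {abs(rates['a'] - rates['b']):.3f}")  # 0.417
```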
- Research Article
- 10.1080/18146627.2016.1183301
- Jan 2, 2016
- Africa Education Review
This study investigated the perceptions and experiences of rural school principals in South Africa regarding the role that parents on school governing bodies (SGBs) play in improving school management and governance. The study reports on a literature review as well as on an empirical investigation based on a qualitative research paradigm. Semi-structured interviews with the principals of three different rural schools were employed to collect data. The literature findings revealed that including parents in the SGB is seen as an essential component of the successful functioning of a school. The empirical study also emphasised the importance of including parents. However, the principals were concerned that many members of the SGBs are illiterate and uncertain of the role they play in school governance. The principals emphasised the need to train SGB members in the working knowledge of school governance activities.
- Research Article
- 10.1007/s43681-021-00065-0
- Jun 6, 2021
- AI and Ethics
This paper argues that the field of artificial intelligence (AI) ethics needs to give more attention to the values and interests of nonhumans such as other biological species and the AI itself. It documents the extent of current attention to nonhumans in AI ethics as found in academic research, statements of ethics principles, and select projects to design, build, apply, and govern AI. It finds that the field of AI ethics gives limited and inconsistent attention to nonhumans, with the main activity being a line of research on the moral status of AI. The paper argues that nonhumans merit moral consideration, meaning that they should be actively valued for their own sake and not ignored or valued just for how they might benefit humans. Finally, it explains implications of moral consideration of nonhumans for AI ethics research and practice, including for the content of AI ethics principles, the selection of AI projects, the accounting of inadvertent effects of AI systems such as via their resource and energy consumption and potentially certain algorithmic biases, and the research challenge of incorporating nonhuman interests and values into AI system design. The paper does not take positions on which nonhumans to morally consider or how to balance the interests and values of humans vs. nonhumans. Instead, the paper makes the more basic argument that the field of AI ethics should move from its current state of affairs, in which nonhumans are usually ignored, to a state in which nonhumans are given more consistent and extensive moral consideration.
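The paper's point about accounting for resource and energy consumption can be illustrated with a standard back-of-envelope estimate (power draw x time x datacenter overhead x grid carbon intensity). The sketch below is illustrative only; every figure and name in it is a hypothetical placeholder, not data from the paper.

```python
# Back-of-envelope CO2 estimate for an AI training run
# (all figures are hypothetical placeholders).

def training_emissions_kg(gpu_count, gpu_power_kw, hours,
                          pue=1.5, grid_kg_co2_per_kwh=0.4):
    """kg CO2 = GPUs x kW each x hours x datacenter PUE x grid intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# e.g. 8 GPUs at 0.3 kW each for 72 hours:
print(f"{training_emissions_kg(8, 0.3, 72):.1f} kg CO2")  # 103.7 kg
```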
- Research Article
- 10.20491/isarder.2025.2023
- Jun 29, 2025
- Journal of Business Research - Turk
Purpose - In the contemporary era, all organizations are affected by technological developments. Organizations that are able to integrate new technologies, including artificial intelligence (AI), into their processes can gain a competitive advantage. In today's fast-moving landscape, the strategic use of AI and other emerging technologies in human resource management (HRM) can enhance HR's role in overall organizational performance. The present study examines the impact of new technologies on the efficacy of modern HRM activities. Design/methodology/approach - This study uses a qualitative research design based on a phenomenological approach to explore HR professionals' experiences with AI and other current technologies they use. Semi-structured interviews were conducted with eight HR professionals and managers from various organizations. The research aims to understand participants' perspectives on how AI and digital tools are reshaping HR functions today. Findings - The findings indicate that adopting new technologies, including AI, in HRM has the potential to enhance efficiency, optimize decision-making processes, and reduce error rates in recruitment and performance appraisal. The study also finds that a comparison of Turkish firms' AI governance practices with global trends reveals a shift towards efficiency over ethical considerations. Discussion - This study highlights the transformative role of new technologies, including artificial intelligence, in HRM, signaling that they improve efficiency, decision-making, and workforce productivity. However, it also points out that digital surveillance, algorithmic bias, and data privacy can create ethical dilemmas. While attention is also paid to AI ethics in the global literature, this pioneering research stresses that Turkish companies prioritize automation, underscoring the need for localized policies that balance technology with responsible practice.
- Research Article
- 10.24843/jnp.v3i1.342
- Mar 31, 2025
- JURNAL NAWALA POLITIKA
Artificial Intelligence (AI) technologies are central to global digital transformation, promising efficiency and improved decision-making. However, algorithmic bias, the systematic and unfair discrimination embedded in AI systems, remains a pressing concern, especially in the Global South, where these technologies are often deployed without contextual adaptation. This paper examines how data and value systems from the Global North shape AI development, contributing to unfair outcomes in developing countries. Using a qualitative literature review grounded in critical data studies and postcolonial theory, it explores digital colonialism and AI systems misaligned with local socio-cultural realities. Key challenges include a lack of representative datasets, cultural misalignment, and weak regulatory frameworks, leading to exclusion and discrimination. The study advocates for a human rights-centered, context-sensitive AI governance framework emphasizing transparency, local participation, ethical pluralism, and capacity-building. Reframing algorithmic bias as a socio-political issue highlights the urgent need for systemic transformation to ensure AI promotes equitable and just outcomes globally.
- Book Chapter
- 10.62311/nesx/97991
- Feb 27, 2025
Abstract: As Artificial Intelligence (AI) becomes increasingly integrated into digital ecosystems, ensuring security and trust in AI-driven systems is paramount. This chapter explores the growing challenges posed by deepfakes, misinformation, and algorithmic bias, which threaten public trust, democratic integrity, and ethical AI adoption. Deepfake technology enables the manipulation of media, leading to fraud, identity theft, and political disinformation, while AI-driven misinformation amplifies fake news and biased narratives through social media algorithms. Additionally, algorithmic bias in hiring, law enforcement, and finance raises concerns about discrimination and fairness in AI decision-making. To counter these threats, AI security strategies—including deepfake detection, fact-checking AI models, fairness-aware algorithms, and cybersecurity measures—are being developed to ensure responsible AI governance. This chapter examines real-world applications, case studies from Google, IBM, Facebook, and OpenAI, and the role of regulations, AI ethics, and transparency in mitigating AI-related risks. Looking forward, the future of AI governance requires a collaborative approach between industry, academia, and policymakers to develop trustworthy, fair, and secure AI systems that benefit society while minimizing risks. Keywords: AI security, trust in AI, deepfakes, misinformation, algorithmic bias, AI ethics, fairness in AI, AI governance, AI transparency, adversarial attacks, explainable AI, cybersecurity, AI-driven misinformation, AI regulations, AI fairness, AI-driven trust.
- Single Book
- 10.62311/nesx/rb978-81-981179-1-5
- Nov 30, 2024
Abstract: This research book examines the convergence of artificial intelligence (AI) and organizational leadership within the context of digital transformation, crisis management, and virtual organizational ecosystems. It addresses a critical gap in contemporary leadership studies by exploring how AI reshapes strategic decision-making, crisis response, and distributed team coordination in volatile, uncertain, complex, and ambiguous (VUCA) environments. Building on interdisciplinary frameworks from adaptive leadership theory, cybernetics, organizational psychology, and computer science, the book develops an integrated conceptual model for AI-driven leadership. The methodology involves qualitative synthesis and conceptual modeling supported by empirical case analysis across sectors such as healthcare, cybersecurity, and smart governance. The book analyzes the implementation of AI-enabled systems—including predictive analytics, intelligent agents, and algorithmic governance tools—in enhancing crisis foresight, automating processes, and improving collaboration in virtual settings. It also critically examines ethical concerns such as algorithmic bias, accountability diffusion, and decision opacity, offering normative frameworks for responsible AI integration. Key findings reveal that AI not only augments leadership capacity but also redefines authority, collaboration, and trust in organizational networks. The book’s contribution lies in articulating a blueprint for human-AI synergy, ethical algorithmic leadership, and resilient, adaptive frameworks suitable for complex socio-technical systems. Its implications extend to organizational strategy, leadership education, public policy, and digital governance in a post-pandemic, AI-mediated world. Keywords: AI-driven leadership, crisis management, virtual organizations, adaptive leadership, algorithmic decision-making, digital transformation, machine learning, ethical AI, intelligent collaboration, human-AI synergy, organizational resilience, socio-technical systems, VUCA environments, leadership ethics, cybernetic governance
- Research Article
- 10.59333/mucin.e8.3
- Mar 31, 2024
- REVISTA MUCIN
The ethics of artificial intelligence offers a framework for evaluating and guiding the regulation of AI technologies; the objective of this study is to analyze the ethics of artificial intelligence in the educational field. Artificial intelligence has also taken on a crucial role in research ethics, as it poses ethical challenges and dilemmas that must be carefully addressed; massive data collection and people's privacy are topics of great relevance. At the same time, it is essential to consider algorithmic bias and automated decision-making that may affect specific groups in society. It is likewise essential to guarantee the privacy and security of student data, to address equity in access to technology, and to avoid exclusion or discrimination based on algorithms. Artificial intelligence is revolutionizing education by providing tools and resources that transform the way we teach and learn: personalization of learning, virtual tutoring, automated assessment, and intelligent educational resources are just some of the applications of AI that are improving the quality and accessibility of education. However, it is important to address the ethical and social challenges associated with its implementation. Keywords: Ethics, artificial intelligence, education
- Research Article
- 10.1093/geroni/igae098.3888
- Dec 31, 2024
- Innovation in Aging
Artificial Intelligence (AI) technology is advancing at a rapid pace, and the older adult population is growing at an unprecedented rate. The purpose of my research is to address the ethical concerns that arise when AI intersects with this population, offering policy recommendations to bridge this ethical digital divide. To keep the policy analysis concise, the review was limited to work published in 2019 or later and covers existing AI ethical policies and recommendations from the European Union, the United States, and U.S. states, the key differences among them, and how ethical AI policies are central to Area Agencies on Aging's development of aging services. My findings indicated gaps in ethical AI design and informed a set of policy recommendations. These gaps include inclusive data collection, user-centric design, algorithmic bias and understanding, digital literacy, New Jersey AI Task Force membership, and the need for a New Jersey AI Ethical Framework. With my five policy recommendations, New Jersey will be better positioned to ensure that AI systems are equitable, ethical, and effective in addressing the present and future needs of older adults. With this ethical AI framework, New Jersey can help improve older adults' quality of life and save valuable time and resources through the new industrial revolution that is Artificial Intelligence. Keywords: Artificial Intelligence, ethics, older adults, algorithmic bias, AI ageism