Articles published on Academic integrity
4825 Search results
- Research Article
- 10.61643/c77904
- Mar 19, 2026
- The Pinnacle: A Journal by Scholar-Practitioners
- Jenna Obee
The rapid adoption of generative artificial intelligence (GAI) has introduced both opportunities and inequities in higher education. While these technologies can personalize learning, improve administrative efficiency, and expand access to information, they also magnify long-standing digital disparities. Digital inequality, historically defined by differences in access, autonomy, skill, social support, and purpose, now extends into the realms of algorithmic fluency and tool literacy. Drawing on Bourdieu’s theory of cultural capital and Actor-Network Theory, the analysis examines how inequitable access to and understanding of GAI affect students, educators, and institutions. The discussion highlights ethical and policy challenges, including bias, privacy, and academic integrity, and proposes a framework for equitable GAI adoption centered on access, literacy, and ethics. The article concludes that higher education leaders must approach GAI integration with a focus on transparency, adaptability, and inclusion to ensure that digital transformation enhances, rather than reinforces, educational inequality.
- Research Article
- 10.24093/awej/vol17no1.15
- Mar 15, 2026
- Arab World English Journal
- Inam Ghalib Sheekhoo Al-Azzawi
This paper examines Arab EFL teachers' views on the use of AI-based chatbots to support writing instruction, exploring pedagogical, didactic, and ethical dimensions. The overall aim is to clarify how teachers perceive AI chatbots as tools in the writing process and to determine to what degree these perceptions shape instructional decision-making and classroom practice. Using a sequential explanatory mixed-methods design, a quantitative survey based on a structured questionnaire was administered to forty Arab EFL teachers. To elaborate and contextualize the survey results, qualitative inquiry was conducted through semi-structured interviews with twelve of the teachers. Findings show that educators tend to perceive AI chatbots as effective complementary instruments for generating ideas, providing linguistic support, and offering initial feedback. However, there were also major concerns about the quality of AI-generated feedback, students' tendency to over-rely on automation, academic integrity, and uneven institutional preparedness and technology infrastructure. The results also indicate that teachers adopt mediated instructional practices, such as guided scaffolding, staged instruction, AI literacy instruction, and verification-based writing activities, to mitigate these issues. The research highlights that successful and responsible implementation of AI in EFL writing classrooms depends largely on teacher mediation rather than on chatbots operating independently.
- Research Article
- 10.1080/13562517.2026.2643825
- Mar 13, 2026
- Teaching in Higher Education
- Stefanus Galang Ardana + 1 more
ABSTRACT In response to generative AI, universities are rapidly deploying AI detection tools to uphold academic integrity. However, the pedagogical impact of these flawed systems remains critically under-examined. Drawing on the theories of Gilles Deleuze and Sara Ahmed, this paper analyzes this dynamic as a system of affective control. Employing a qualitative case study of six non-native English-speaking thesis writers in Indonesia, this paper contributes a theoretical reframing of the AI detector as an ‘affective engine of control.’ Findings demonstrate how algorithmic surveillance compels students toward authorial alienation, reinforces linguistic injustice, and facilitates a ‘pedagogical abdication’ by lecturers. We conclude that a focus on policing undermines learning and advocate for a shift to human-centered Critical AI Literacy.
- Research Article
- 10.1002/ail2.70025
- Mar 11, 2026
- Applied AI Letters
- Anthony Bloxham + 3 more
ABSTRACT Generative artificial intelligence (GenAI) raises pressing pedagogical and ethical questions in higher education. We surveyed 87 UK psychology students about GenAI familiarity, study uses, attitudes, and the justification of questionable uses (neutralisation). 54% reported using GenAI to assist their studies, primarily via ChatGPT. Compared with non‐users, study users showed more positive AI attitudes and higher neutralisation scores. Across the full sample, AI attitudes modestly predicted neutralisation. The most common study uses were explaining concepts and generating ideas, and most users intended to use GenAI again. Non‐users were more likely to endorse restrictive views on GenAI in assessed work. Findings point to a tension between perceived learning value and risks of dependency and academic integrity. Students also reported a need for clearer institutional guidance. We recommend a balanced approach that supports responsible use, feedback literacy, and critical engagement with AI outputs, alongside continued student‐centred research to inform policy and assessment design.
- Research Article
- 10.55942/pssj.v6i3.942
- Mar 11, 2026
- Priviet Social Sciences Journal
- Nisfu Istiqomah + 1 more
The development of artificial intelligence (AI) technology has led to significant changes in students' learning. The increasing use of platforms such as ChatGPT, Gemini, and Meta AI reflects a shift in learning habits that now emphasize speed over the reflective and collaborative approaches characteristic of traditional learning. This study aims to analyze how the use of AI creates new social habits among students in Indonesia and its impact on social values, morals, and the education system. The study uses a descriptive qualitative approach with a literature review, analyzing a range of relevant national and international scientific literature. The results show that repeated use of AI forms a digital habitus that emphasizes efficiency and quick results but weakens students' critical and reflective thinking skills. Furthermore, unequal access to technology deepens educational stratification, while the values of academic honesty and social responsibility are beginning to shift. Education in the AI era must therefore focus on strengthening ethical digital literacy and forming a reflective habitus to ensure that technological development remains aligned with humanitarian values, morality, and academic integrity.
- Research Article
- 10.1016/j.nedt.2026.107073
- Mar 11, 2026
- Nurse education today
- Wesam Taher Almagharbeh + 10 more
Undergraduate nursing students' perceptions of using holopatient in learning and clinical training: An exploratory-descriptive qualitative study.
- Research Article
- 10.1038/s42949-026-00374-5
- Mar 11, 2026
- npj Urban Sustainability
- Santina Contreras + 2 more
Abstract The current U.S. federal administration has sought to intervene into every aspect of academic life, university functioning, and the research enterprise including by attacking academic freedom and integrity and canceling and retreating from publicly funded research. Such actions have profound adverse effects on the U.S. public, especially its most marginalized communities, and on science, itself. This perspective provides a telling example of such impacts through our own experience of funding cancellation, the disruptions it causes and the effects it has on urban systems and the communities they support. By focusing on our project that sought to center environmental justice communities in urban transportation and climate planning we offer insights into the wide-ranging effects of such disinvestment, including on sustainability and air quality efforts, with recommendations for moving forward to advance sustainable, equitable, and resilient cities.
- Research Article
- 10.5171/2025.4629425
- Mar 11, 2026
- Communications of International Proceedings
- Krzysztof Bodzenta
The article discusses selected challenges related to the integration of artificial intelligence (AI) tools into legal education. The motivation for addressing this topic is the dynamic development of AI technologies, which increasingly influence the educational process, combined with the lack of systematic reflection on their consequences in this field. The text is based on a review of the available literature as well as ongoing academic and institutional debates. It first identifies areas in which AI-based systems may support teaching and learning. The discussion then focuses on three main categories of challenges: pedagogical, organizational, and ethical. The analysis covers both the evolving role of the academic teacher in the face of new technologies and the institutional consequences of implementing AI at universities, as well as issues of academic integrity and equal access. The article concludes with recommendations for the sustainable and responsible use of AI in training future lawyers.
- Research Article
- 10.11113/itlj.v10.218
- Mar 10, 2026
- Innovative Teaching and Learning Journal
- Siti Hajar Abd Hamid + 2 more
Academic writing practices in education have changed significantly since the advent of artificial intelligence. AI technologies offer new possibilities for fostering student learning, feedback mechanisms, and student engagement. However, studies of AI-supported academic writing remain scattered across the education literature and have yet to be synthesized. Within this context, the present study adopts a bibliometric approach to explore research trends, thematic developments, and the evolving knowledge structure of AI for academic writing. English-language journal articles published between 2022 and 2026 were identified through an initial Scopus search, after which 397 articles were retained for analysis. Scopus Analyser was used to identify publication trends and descriptive statistics, OpenRefine to clean and standardise bibliographic data, and VOSviewer to visualise author collaboration networks, international research collaborations, and keyword co-occurrence patterns. The results show significant growth in publication output after 2022, in line with the rapid development of generative AI technologies. Publication output and international collaborations are concentrated in China and the United States. Keyword co-occurrence analysis shows that the core thematic clusters include academic writing, formative feedback, metacognitive support, argumentation development, and academic integrity. Although research on AI-supported academic writing has grown rapidly, attention to discipline-specific writing contexts remains relatively limited. Overall, the results suggest that AI can be leveraged to enhance academic writing pedagogy through technology integration, streamlined feedback mechanisms, and greater student participation. Effective implementation still relies on informed direct instruction, ethics around AI use, and the creation of AI toolkits aligned with academic writing goals. This paper provides a systematic overview of previous research and evidence-based considerations relevant to future AI-supported academic writing research.
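The keyword co-occurrence analysis this abstract describes can be illustrated with a minimal sketch. The study itself used VOSviewer on 397 Scopus records; the snippet below uses a few invented keyword lists as stand-ins for real bibliographic data and shows only the underlying counting step that such tools then cluster and visualise.

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists for a handful of records
# (the actual study analysed 397 Scopus records).
records = [
    ["academic writing", "formative feedback", "generative AI"],
    ["academic writing", "academic integrity", "generative AI"],
    ["formative feedback", "metacognitive support"],
    ["academic writing", "argumentation development", "formative feedback"],
]

# Count how often each unordered keyword pair appears in the same record;
# this co-occurrence matrix is what co-word mapping tools cluster and plot.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# The most frequent pairs hint at thematic clusters.
print(cooccurrence.most_common(3))
```

With real data, the raw counts are usually normalised (e.g. by association strength) before clustering, so this sketch covers only the first step of the pipeline.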
- Research Article
- 10.38140/obp4-2026-01
- Mar 10, 2026
- Open Books and Proceedings
- Godwin Pedzisai Dzvapatsva + 4 more
Postgraduate supervision plays a critical role in shaping research outcomes, student development, and the mentor-mentee relationship. However, traditional supervision practices, often characterised by limited flexibility and heavy reliance on supervisors, can constrain student growth. The emergence of GenAI presents new opportunities for personalised guidance, faster communication, and increased student autonomy. This study explores the role of GenAI in transforming mentor-mentee relationships, identifying potential benefits and implications for postgraduate education. Adopting a qualitative approach, this study conducted a PRISMA-guided systematic review of relevant literature across Scopus, Web of Science, IEEE Xplore, ScienceDirect, Springer, and Google Scholar. The findings indicate that GenAI enhances supervision by improving feedback and critical thinking, promoting student autonomy and motivation, and introducing considerations for ethical and academic integrity. Effective implementation of GenAI in postgraduate education requires a balanced approach that leverages technological advancements while preserving the relational and empathetic aspects of mentor-mentee interactions. Overall, this study underscores the need for further research to investigate the long-term effects of GenAI on academic supervision and to establish best practices for integrating AI tools in a manner that enhances, rather than undermines, the mentorship experience. The study relied on secondary data, and future studies should focus on collecting primary data on the role of artificial intelligence in the mentor-mentee relationship.
- Research Article
- 10.38140/obp4-2026-03
- Mar 10, 2026
- Open Books and Proceedings
- Edmore Chinhamo + 2 more
The advent of technology, particularly the rapid advancement of Artificial Intelligence (AI), is posing significant challenges to traditional models of postgraduate student supervision, ranging from affective mentorship relationships to automated interactions. AI-powered tools such as ChatGPT, Grammarly, DeepSeek, and automated data analysis software provide students with unprecedented support, enhancing and automating routine tasks. Consequently, the role of supervisors in upholding the fundamental principles of mentoring—such as fostering critical thinking, creativity, and ethical inquiry—is being scrutinised in light of this technological shift. This chapter examines the challenges associated with the incorporation of AI into postgraduate supervision, investigating its impact on intellectual independence, academic integrity, and mentor-mentee dynamics. Through a comprehensive systematic literature review, this conceptual paper identifies strategies for balancing AI-driven efficiencies with human-centred mentoring practices. Additionally, we address ethical considerations, power dynamics, and equity issues that arise within AI-mediated supervision. Our contributions suggest that while AI offers transformative potential, it is essential to preserve the human elements of supervision: empathy, intuition, and the capacity to inspire original thought. This chapter contributes to the ongoing conversation on redefining postgraduate supervision in the digital age, providing actionable insights for supervisors navigating the challenges and opportunities presented by AI.
- Research Article
- 10.38140/obp4-2026-04
- Mar 10, 2026
- Open Books and Proceedings
- Winter Sinkala + 1 more
The doctorate has long been regarded as the pinnacle of higher educational attainment, demanding originality, critical inquiry, and the capacity to generate new knowledge—qualities collectively referred to as doctoralness. In the early twenty-first century, doctoral education is undergoing transformation due to the increasing prevalence of artificial intelligence (AI) tools in research design, data analysis, academic writing, and supervisory practices. This chapter examines the intersection of AI with the nature and practice of doctoralness. We begin by clarifying the historical and conceptual foundations of doctoralness as an intellectual and identity-forming endeavour that extends beyond mere technical research skills. Subsequently, we explore the evolving landscape of the PhD as candidates, supervisors, and institutions adopt AI-enabled tools for literature synthesis, multilingual writing support, modelling, and personalised feedback. While these tools promise efficiency, inclusivity, and new modes of collaboration, they also pose risks—such as over-reliance, erosion of critical judgment, breaches of academic integrity, and the widening of inequities between well-resourced and under-resourced contexts. Drawing on global literature and examples from South Africa and the Global South, this chapter discusses strategies for safeguarding doctoralness through supervisor professional development, institutional AI literacy frameworks, and policies grounded in ethical and epistemic justice. We argue that the responsible integration of AI can enrich rather than diminish doctoral education when guided by human criticality and robust scholarly norms. The chapter concludes with recommendations for future directions in AI-infused doctoral training within a digitally mediated knowledge society.
- Research Article
- 10.38140/obp4-2026-07
- Mar 10, 2026
- Open Books and Proceedings
- Peter Babajide Oloba
The integration of artificial intelligence (AI) tools into postgraduate supervision in higher education has accelerated globally, offering opportunities to enhance efficiency in research processes and academic mentoring. However, limited empirical evidence exists regarding the risks and challenges of this integration, particularly within Global South contexts such as South Africa. This study investigates the challenges associated with the use of AI tools in postgraduate supervision from a South African perspective. Anchored in a constructivist paradigm, the study adopts a qualitative research design, employing semi-structured interviews with 20 purposively selected participants—10 postgraduate students and 10 supervisors from faculties that are actively integrating AI into supervisory practices. Data were analysed thematically using qualitative content analysis. The findings identify six key challenges: increasing dependence on AI that may erode students’ critical thinking and originality; insufficient digital literacy and institutional support; financial and sustainability constraints; the questionable reliability and accuracy of AI-generated outputs; ethical dilemmas and limited cultural contextualisation; and resistance to technological change among supervisors. While acknowledging the potential of AI to enhance research productivity and the quality of supervision, the study cautions against its uncritical adoption, which may compromise academic integrity, creativity, and equity. It recommends institutional strategies, including subsidised AI access, structured training on ethical and critical AI use, the embedding of digital literacy in postgraduate curricula, and the fostering of collaboration with AI developers to ensure culturally relevant systems. A context-sensitive approach is essential to balance the affordances of AI with the preservation of human intellectual agency and critical scholarly engagement in postgraduate supervision.
- Research Article
- 10.38140/obp4-2026-11
- Mar 10, 2026
- Open Books and Proceedings
- Gardner Mwansa + 1 more
Higher education has undergone a rapid transformation in recent years, driven by the dual pressures of mitigating the long-term effects of COVID-19 and integrating generative artificial intelligence (GenAI) technologies. The pandemic exposed and exacerbated pre-existing inequalities and power imbalances within the sector, necessitating policy adaptations to address issues such as digital inequality, limited social interaction, barriers faced by student researchers in conducting face-to-face data collection, and the protection of mental health. Concurrently, GenAI has emerged as a disruptive technology that is reshaping pedagogical practices, research processes, and supervisory relationships. Although GenAI is widely promoted as a tool that can enhance teaching, research, administration, and student support, it raises critical concerns related to academic integrity, ethics, systemic bias, knowledge ownership, and uneven regulatory standards. Supervisors similarly hold divergent views regarding its usefulness and risks, a tension also reflected in inconsistent journal policies on GenAI use. Guided by the GenAI–Technological Pedagogical Content Knowledge framework (GenAI-TPACK), this study examined the ethical and literacy imperatives necessary for transforming research supervision in the era of GenAI. A systematic literature review was conducted to identify emerging GenAI literacy indicators that facilitate ethical, transparent, responsible, and informed engagement with GenAI during the research process. The review revealed significant gaps in supervisor preparedness, uneven AI literacy among research candidates, and a lack of coherent institutional guidance. The study contributes practical insights for higher education institutions seeking to balance the opportunities and challenges posed by GenAI and offers direction for developing humanising, context-sensitive guidelines for responsible integration in research supervision.
- Research Article
- 10.38140/obp4-2026-12
- Mar 10, 2026
- Open Books and Proceedings
- Tichaona Chikore + 1 more
The increasing adoption of artificial intelligence (AI) in academic research has reshaped scholarly practices while introducing complex ethical risks, particularly concerning research integrity and academic misconduct. This study proposes a comprehensive quantitative and empirical framework, adapted from the Cobb-Douglas production function, to model how the misuse of AI contributes to systemic quality degradation, using retractions as a proxy for integrity breaches. By leveraging longitudinal publication and retraction data from Retraction Watch and Scopus, we construct an AI misuse impact index to track the relationship between research output and integrity risks over time. Time series lag analysis reveals that retraction rates most strongly correlate with prior publication volumes at a one-year lag, indicating the rapid manifestation of AI-driven misconduct. To identify critical intervention points, we apply piecewise linear modelling to detect thresholds where retraction rates accelerate disproportionately relative to publication growth. A plagiarism tolerance threshold is established, beyond which research quality deteriorates unsustainably. Additionally, we introduce a probabilistic damage model, quantifying the risk of systemic integrity failure as AI adoption expands. Results highlight a pronounced post-2009 rise in AI-related integrity risks, with a sharp inflection in 2023 when misconduct indicators exceeded acceptable tolerance levels, signalling a system-wide ethical crisis. The study further proposes a dynamic, data-driven method for calibrating institutional plagiarism thresholds in alignment with evolving integrity risks and patterns of AI adoption. This model enables proactive monitoring and policy adjustments, linking integrity governance directly to empirical risk indicators. 
The findings underscore the urgent need for adaptive, transparent AI oversight frameworks within academia, ensuring that AI complements rather than undermines the ethical and intellectual foundations of research. Future research should extend this work by integrating discipline-specific AI use patterns and developing real-time academic integrity monitoring systems.
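The lag and breakpoint analyses this abstract describes can be sketched in a few lines. The numbers below are synthetic stand-ins (the study used Retraction Watch and Scopus data, and its specific misuse index and tolerance thresholds are not reproduced here); the snippet only illustrates the two mechanics involved: correlating retraction counts with earlier publication volumes, and comparing retraction-rate trends before and after a candidate breakpoint year.

```python
import numpy as np

# Synthetic yearly series for illustration only.
years = np.arange(2015, 2025)
publications = np.array([100, 110, 125, 140, 160, 185, 215, 250, 320, 410], float)
retractions  = np.array([  2,   2,   3,   3,   4,   5,   6,   8,  15,  24], float)

def lag_corr(pub, ret, k):
    # Correlation between publications in year t and retractions in year t+k,
    # i.e. retractions responding to publication volume k years earlier.
    return np.corrcoef(pub[:len(pub) - k], ret[k:])[0, 1] if k else np.corrcoef(pub, ret)[0, 1]

# Which lag (0-3 years) best explains retractions?
best_lag = max(range(4), key=lambda k: lag_corr(publications, retractions, k))

# Piecewise check: fit separate linear trends of the retraction *rate*
# before and after a candidate breakpoint and compare slopes.
rate = retractions / publications

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

bp = 2022  # candidate breakpoint year (an assumption for this sketch)
pre, post = years <= bp, years >= bp
print(best_lag, slope(years[pre], rate[pre]), slope(years[post], rate[post]))
```

A markedly steeper post-breakpoint slope is the kind of disproportionate acceleration the authors use to flag a threshold; a full piecewise model would instead search over candidate breakpoints and fit both segments jointly.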
- Research Article
- 10.38140/obp4-2026-06
- Mar 10, 2026
- Open Books and Proceedings
- Thembi Busisiwe Nkosi
The use of AI by postgraduate students is rapidly changing supervisory relationships and demands new supervisory skills. This study examines the fundamental supervisory abilities needed to guide postgraduate students who integrate AI tools into their research. It employs a qualitative, exploratory phenomenological method within an interpretive research paradigm to investigate supervisors' subjective experiences and perspectives in AI-integrated supervision environments. Ten purposively selected supervisors with experience in AI-enhanced settings provided data through semi-structured interviews. Thematic analysis of the interview transcripts revealed consistent patterns regarding supervisory competencies. Supervisors need to cultivate critical evaluation skills to identify students' overdependence on AI systems and to detect AI-generated material that lacks originality by interpreting underlying meanings. Supervisors must also guide students in essential research techniques, such as literature searching and correct source attribution, to uphold academic integrity. The study emphasises the importance of requiring students to document their research steps and participate in evaluative discussions that test their understanding and ethical use of AI. Supervisory responsibilities must incorporate AI tools while simultaneously promoting independent critical thought and ethical principles. The study proposes specialised training programmes to enhance supervisors' AI literacy and evaluation skills, along with clear ethical guidelines for AI use in postgraduate research. Future research should investigate how AI integration affects supervisory relationships over time and develop scalable supervisor training frameworks suitable for various academic fields and institutional settings.
- Research Article
- 10.55942/pssj.v6i3.1188
- Mar 10, 2026
- Priviet Social Sciences Journal
- Araf Aliwijaya + 6 more
This study aims to identify repository access policies in university libraries in the Special Region of Yogyakarta. A qualitative approach was used, with data collected through in-depth interviews with eight informants from four selected libraries, together with observation and review of repository websites. The results show a wide variety of policies, ranging from full open access, limited access per chapter, and access restricted to institutional members through Single Sign-On, to access available only on library computers, as well as take-down and embargo practices. The findings reveal a general chronological pattern: initial openness, followed by gradual restriction driven by concerns about plagiarism, protection of sensitive data, administrative burdens, resource limitations, and pressure from internal actors such as lecturers. The discussion emphasizes that policy in practice is more complex than the typologies in the literature because it is shaped simultaneously by technical, normative, and administrative factors. This research is intended to serve as a guideline for contextual and consistent repository policies, strengthening technical and managerial capacity, copyright policy, and communication strategies to increase researcher participation. Recommendations include developing integrated embargo and authentication mechanisms to balance open access with the protection of academic integrity.
- Research Article
- 10.36311/1981-1640.2026.v20.e026006
- Mar 9, 2026
- Brazilian Journal of Information Science: research trends
- Nivaldo Calixto Ribeiro + 2 more
The general objective of this article is to analyze the perceptions of graduate students and holders of master's and doctoral degrees regarding the availability of open access research data. The specific objectives are: (1) to identify researchers' level of knowledge about open research data; (2) to analyze perceptions of the benefits of and barriers to open access data sharing; (3) to verify the adoption of data management practices and knowledge of data repositories; and (4) to assess researchers' willingness to make their research data available in open access. An exploratory-descriptive study was conducted using intentional "snowball" sampling, which reaches a network of interconnected participants and thereby expands the scope of the sample. The data collection instrument was a structured questionnaire, developed on the Google Forms platform, with closed Likert-scale questions. It was found that most researchers are familiar with the concept of open research data as cited in the Open Science Taxonomy, indicating growing awareness of the importance of transparency and accessibility of scientific data. In addition, there is significant interest in making data available in open access, seen as a practice that can strengthen the advancement of knowledge and academic integrity. However, challenges such as the protection of sensitive data, intellectual property issues, and the lack of infrastructure for data management and sharing were identified, highlighting the need for institutional policies and technical support for the safe and effective adoption of open data practices. It was concluded that researchers' perceptions reflect both the opportunities and the challenges of implementing open access, contributing to a deeper understanding of this issue in the context of science.
- Research Article
- 10.24260/ngaji.v5i2.121
- Mar 9, 2026
- Ngaji: Jurnal Pendidikan Islam
- Husniyatul Ariibah + 1 more
Education in the modern era has undergone a significant transformation alongside advances in digital technology. Among the digital technologies currently popular in education are ChatGPT and other AI (artificial intelligence) applications, used as interactive and responsive learning assistants. This study examines how ChatGPT and AI applications can be used wisely as learning assistants. The research employs a qualitative, literature-based approach analyzed through content analysis, drawing on scientific articles, books, and other relevant references. The results show that applying ChatGPT and AI in Islamic Religious Education (PAI) can improve the quality and effectiveness of learning through blended learning models and features such as visual mentors, voice assistants, and translation tools. Several challenges remain in implementation, however, including limited digital infrastructure, low digital literacy among teachers, and concerns about the accuracy and authenticity of religious content. Wise strategies are needed to improve digital literacy, verify AI-generated information, instill ethics, and maintain supervision. ChatGPT and AI applications have positive impacts, such as increasing students' motivation and interest in learning, but also negative impacts, such as technology dependence, erosion of academic integrity, and plagiarism. Their use therefore needs to be careful and responsible.
- Research Article
- 10.38124/ijsrmt.v5i2.1232
- Mar 7, 2026
- International Journal of Scientific Research and Modern Technology
- Yuri Arsénio De Matos
Background: The rapid expansion of Artificial Intelligence (AI) in higher education has reshaped teaching, assessment, and academic writing practices, exposing the limitations of traditional plagiarism detection models. At the same time, the emergence of advanced algorithmic systems and generative AI has intensified ethical, pedagogical, and institutional debates concerning authorship, academic integrity, and fair assessment. Objective: To critically analyze the impact of Artificial Intelligence on plagiarism detection in higher education, considering its technical, ethical, and pedagogical implications. Method: A systematic literature review was conducted following the PRISMA protocol, using international databases including Scopus, Web of Science, ERIC, IEEE Xplore, and Google Scholar. A total of 963 records were identified, of which 18 studies met the eligibility criteria and were included in the final analysis. Results: The findings reveal growing reliance on AI-based plagiarism detection systems, alongside persistent technical limitations such as overreliance on similarity scores, algorithmic bias, false positives, and reduced effectiveness in identifying AI-generated texts. The results also highlight significant effects on teaching and assessment practices, particularly when automated outputs are applied without adequate pedagogical mediation. Conclusions: Artificial Intelligence reshapes plagiarism detection practices but does not replace contextualized human judgment. Its responsible use requires clear institutional policies, strengthened academic and ethical literacy, and pedagogical approaches that prioritize formative processes over punitive measures, particularly within Global South contexts.