  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00570-w
Autonomy versus algorithm: a replication study of student perspectives on AI ethical boundaries
  • Dec 1, 2025
  • International Journal of Educational Technology in Higher Education
  • Aminu Muhammad Auwal

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00568-4
Can synthetic avatars replace lecturers? An exploratory international study of higher education stakeholder perceptions
  • Nov 28, 2025
  • International Journal of Educational Technology in Higher Education
  • Jasper Roe + 4 more

Abstract: Advances in technologies which use Generative Artificial Intelligence (GenAI) to mimic a person’s likeness or voice have led to growing interest in their use in educational contexts. However, little is known about how key stakeholders (teaching faculty and professional staff) perceive and intend to use these tools. This study investigates higher education employees’ perceptions and intentions regarding the use of synthetic avatars (alternatively known as deepfakes) through the lens of the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). Using a mixed-methods approach that combined quantitative survey data (n = 173) with qualitative text responses, we found that academic stakeholders demonstrated a relatively low intention to adopt these technologies (M = 41.55, SD = 34.14) and held complex, often contradictory views about their implementation. Stakeholders identified potential benefits, including enhanced student engagement through interactions with historical figures, improved accessibility through voice synthesis, and reduced workload in content creation. However, they expressed significant concerns about the exploitation of academic labour, institutional cost-cutting leading to automation, degradation of human relationships in education, and broader societal impacts, such as environmental costs and information validity. Quantitative analysis revealed that adoption intentions were most strongly associated with hedonic motivation, with a gender-specific interaction in the evaluation of price value. Qualitative findings highlighted significant concerns regarding ethical implications, resource inequities, and the impact on professional identity. These results suggest that traditional technology acceptance models should be expanded to consider broader ethical and structural factors.
Based on these findings, we propose a three-pillar framework for implementing synthetic avatar technologies in higher education that emphasises establishing robust institutional policies and governance structures, developing comprehensive professional development and support systems, and ensuring equitable resource allocation guided by evidence-based implementation strategies. This study enhances our understanding of how emerging AI technologies can be thoughtfully integrated into higher education while maintaining academic integrity and the professional autonomy of educators.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00569-3
Effects of two scenario approaches for digital sobriety education among higher education students
  • Nov 24, 2025
  • International Journal of Educational Technology in Higher Education
  • Sarah Descamps + 2 more

Abstract: In the context of the growing impact of digital technology, this study explores the effectiveness of a gamified learning tool designed to educate higher education students about digital sobriety. The aim is to analyse the effects of two different learning scenarios on digital sobriety maturity, motivation to adopt responsible digital behaviour, and the feeling of competence to act collectively. In an experimental approach, 107 students took part in a game-based learning experience (escape game) followed by the drafting of digital eco-gesture charters. One group was asked to take individual action, while the other was asked to take collective action. The results show that both scenarios improve digital maturity, with no significant difference between the two. However, the collective scenario reinforces the feeling of competence to act collectively. Finally, regardless of the scenario, the students appear to be motivated by intrinsic and identified reasons, underlining their awareness.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00565-7
How does peer assessment support students’ self-regulation? A case study in online education
  • Nov 12, 2025
  • International Journal of Educational Technology in Higher Education
  • Maite Fernández-Ferrer + 3 more

Abstract: The importance of self-regulation as an essential element for lifelong learning calls for the design of learning processes that promote it. In this context, peer assessment, characterised by promoting metacognitive reflection and guiding students in modifying their learning strategies during the process, is considered a key and effective element for its development. This research studies the effects of implementing peer feedback strategies on the development of the competence of learning to learn. The aim is to improve self-regulation in Spanish higher education students, specifically Master’s students in an online learning environment at an open university. The main objective of this contribution is to determine if the active involvement of students (n = 111) in peer assessment (in the role of assessor or assessed) is confirmed as an effective self-regulation strategy. To do this, a self-regulation questionnaire was administered at the beginning and end of the experience, as well as a satisfaction questionnaire regarding the peer assessment experience. The results highlight that through the implemented peer feedback strategies, students specifically improved their capacity to deeply analyse tasks and clearly visualise objectives, which are elements related to the initial planning phase of self-regulation. The conclusions point to the need for students to take responsibility for self-regulation, and the opportunity that technology can provide in supporting student self-regulation, motivation, and participation in assessment in online learning environments.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00564-8
A person-centered perspective in assessing college students’ self-regulated learning in an online learning environment: potential profiles, antecedents, and outcomes
  • Nov 7, 2025
  • International Journal of Educational Technology in Higher Education
  • Yafei Shi + 5 more

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00560-y
Online project-based learning to foster students’ course choices in data science: a longitudinal case study using Sankey visualization
  • Oct 20, 2025
  • International Journal of Educational Technology in Higher Education
  • Daniela Castellanos-Reyes + 1 more

Abstract: Career choices are shaped by students’ experiences, knowledge, and skill sets across time, reflecting not only disciplinary interests but also exposure to evolving fields such as data science (DSC). Despite a surge in interest and enrollment in data science degrees, the United States faces a growing demand for data literacy across multiple sectors. Online learning environments have become entry points for students’ initial engagement with DSC, offering accessibility and supporting workforce needs. Nevertheless, the interdisciplinary essence of DSC means that clear career paths remain ambiguous, especially for those applying DSC knowledge within various disciplines. While national data sources provide valuable overviews of degree distributions, more granular analysis at the course level is warranted to understand nuanced student trajectories. Project-based online learning, though proven valuable in in-person settings, remains underexplored in online DSC education. This study employs curriculum analytics and Sankey diagram visualizations to investigate course enrollment patterns and career trajectories among students after enrolling in an introductory online project-based DSC course. We built a longitudinal dataset by following 35 students between Fall 2022 and Spring 2024, tracking their subsequent course enrollments over time. Demographic and academic data were sourced from institutional enrollment records, allowing subgroup analysis based on major, gender, race, first-generation status, and achievement. Our exploratory analysis reveals patterns indicating that continued DSC course enrollment appears prevalent among nonwhite, male, STEM-major, and academically proficient students, whereas first-generation students exhibit no persistence. We illustrate how Sankey diagrams, though not establishing causality, provide actionable insights for program and curriculum development in DSC education.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00558-6
The value of GenAI for peer feedback provision: student perceptions and impacts
  • Oct 15, 2025
  • International Journal of Educational Technology in Higher Education
  • Omid Noroozi + 5 more

Abstract: Generative Artificial Intelligence (GenAI) has sparked a global debate on its potential as a feedback source for students, yet research in this area remains limited. This study explores students’ use of GenAI during peer feedback provision. Fifty-four graduate students enrolled in a master’s course in the food science domain at a Dutch university received instruction on the effective and ethical use of GenAI. They then wrote an argumentative essay, provided feedback to peers, and revised their essays. Finally, students completed an online questionnaire regarding their perceptions and use of GenAI for peer feedback provision. Descriptive analyses were applied to survey data, and comment data were coded quantitatively for the presence of comment features. The results revealed that just over half of the students chose not to use GenAI for peer feedback provision, primarily because they believed they would learn more by completing the task independently. The remaining students used GenAI to improve both high-level and low-level aspects of their feedback, and most of these students found GenAI to be moderately helpful for peer feedback provision. In terms of its impact on the peer feedback content, students who used GenAI provided more suggestions for high-level issues and offered less mitigating praise for low-level issues compared to those who did not use GenAI for peer feedback provision. These results offer valuable insights for the design and adoption of GenAI tools to enhance peer feedback practices.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00555-9
Student reactions to AI versus human feedback in teamwork skills assessment
  • Oct 10, 2025
  • International Journal of Educational Technology in Higher Education
  • Igor Kotlyar + 1 more

Abstract: As AI technologies become increasingly integrated into education, this research investigates how students react to AI-generated versus human feedback in teamwork skills assessment. In Study 1, 108 students completed a virtual teamwork simulation and received assessment feedback framed as either AI- or human-generated. Students showed a clear preference for human feedback over AI feedback, revealing a bias against machine-generated evaluations. Study 2, a scenario-based experiment involving 322 students, confirmed these findings and tested whether enhancing AI feedback with credibility and empathy cues could improve perceptions. These enhancements significantly improved reactions to AI feedback, and when both credibility and empathy were emphasized, reactions approached those for unenhanced human feedback. However, even with enhancements, AI feedback did not fully match the positive perceptions of human feedback. These findings highlight the need for thoughtful design to mitigate biases against AI feedback and suggest that blending AI and human feedback offers an effective approach for improving acceptance and engagement in educational contexts.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00556-8
What students really think: unpacking AI ethics in educational assessments through a triadic framework
  • Oct 6, 2025
  • International Journal of Educational Technology in Higher Education
  • Tristan Lim + 2 more

Abstract: The rise of AI in educational assessments has significantly enhanced efficiency and accuracy. However, it also introduces critical ethical challenges, including bias in grading, data privacy risks, and accountability gaps. These issues can undermine trust in AI-driven assessments and compromise educational fairness, making a structured ethical framework essential. To address these challenges, this study empirically validates an existing triadic ethical framework for AI-assisted educational assessments, originally proposed by Lim, Gottipati and Cheong (In: Keengwe (ed) Creative AI tools and ethical implications in teaching and learning, IGI Global, 2023), grounded in student perceptions. The framework encompasses three ethical domains—physical, cognitive, and informational—which intersect with five key assessment pipeline stages: system design, data stewardship, assessment construction, administration, and grading. By structuring AI-driven assessments within this ethical framework, the study systematically maps key concerns, including fairness, accountability, privacy, and academic integrity. To validate the proposed framework, Structural Equation Modeling (SEM) was employed to examine its relevance and alignment with learners' ethical concerns. Specifically, the study aims to (1) evaluate how well the triadic framework aligns with learners' perceptions of ethical issues using SEM analysis, and (2) examine relationships among the assessment pipeline stages, ethical considerations, pedagogical outcomes, and learner experiences. Findings reveal robust connections between AI-assisted assessment stages, ethical concerns, and learners' perspectives. By bridging theoretical validation with practical insights, this study emphasizes actionable strategies to support the development of AI-driven assessment systems that balance technological efficiency, pedagogical effectiveness, and ethical responsibility.

  • Research Article (Open Access)
  • DOI: 10.1186/s41239-025-00557-7
University staff and student perspectives on competent and ethical use of AI: uncovering similarities and divergences
  • Oct 1, 2025
  • International Journal of Educational Technology in Higher Education
  • Manoj Ravi + 4 more

Abstract: We investigated the similarities and differences in understanding among UK-based university staff and students regarding AI literacy, in terms of competent and ethical use of AI tools. This study builds on existing research revealing both widespread use of AI tools in higher education and a lack of shared understanding among stakeholder groups on what constitutes competent and ethical use of AI. This study is one of the first to combine insights from staff and students, illustrating specific concerns over AI competence and ethical implications in granular detail. The results reveal a significant disparity in the use of AI tools between students and staff, particularly in the adoption of text-based or conversational GenAI tools (cGenAI). Students reported extensive use of cGenAI tools for a range of tasks, while staff engagement was generally limited to brainstorming ideas or generating teaching tasks. Although the use of cGenAI is seen by most as AI competence, nuanced differences emerged between staff and student opinion depending on the application of the AI tool. Ethical issues were prominent in both groups, although staff reported more systemic concerns regarding inherent bias, transparency, and data ownership. Over 90% of staff flagged the use of cGenAI for essay generation as problematic, compared to 58% of students, primarily due to concerns regarding academic integrity. These differences point to the need for institutional guidelines and dialogue to address ethical concerns and align expectations across stakeholder groups to ensure the effective integration of AI literacy in higher education.