Teacher Training in Generative AI: Ethical Impact and Challenges in Higher Education

Abstract

Generative Artificial Intelligence (GAI) is reshaping higher education and transforming instructional and assessment practices; educators must therefore develop both technical expertise and pedagogical awareness to ensure its ethical and responsible use. This study evaluates the impact of an 80-hour GAI training program conducted with 299 lecturers from eight Ecuadorian universities, aimed at enhancing their digital skills and openness to AI-based teaching strategies. Using a quasi-experimental design with pretest and posttest assessments, the findings reveal an increase in technical proficiency (M = 2.62 to 4.22, t = -30.77, p < 0.0001, d = 0.85) and in lecturers’ willingness to apply GAI in their teaching (M = 3.63 to 4.02, t = -6.38, p < 0.0001, d = 0.52). However, perceptions of the originality of AI-generated content remained unchanged (M = 3.02 to 2.94, t = -0.82, p = 0.41), indicating ongoing concerns about authenticity in academic settings. These results underscore the need for training programs that combine technical instruction with active learning methodologies such as project-based learning and formative assessment. Higher education institutions should also establish clear policies regulating AI implementation to uphold ethical standards and academic integrity. Finally, developing institutional guidelines for assessing AI-generated content is essential for maintaining transparency, fairness, and responsible adoption in teaching and assessment, so that best practices can be identified to support lecturers’ development and promote the effective use of GAI in academic fields.
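As an illustration of the paired-design statistics reported above (a paired-samples t statistic and Cohen's d computed from pretest-posttest differences), a minimal sketch using hypothetical Likert scores — these values are invented for illustration and are not the study's data (the study had n = 299):

```python
import math
from statistics import mean, stdev

# Hypothetical pretest/posttest scores (1-5 Likert) for six lecturers.
# Illustrative values only -- NOT the study's actual data.
pre = [2.0, 3.0, 2.5, 3.5, 2.0, 3.0]
post = [4.0, 4.5, 4.0, 4.5, 3.5, 4.5]

diffs = [b - a for a, b in zip(pre, post)]  # posttest-minus-pretest gains
d_bar = mean(diffs)                         # mean paired difference
s_d = stdev(diffs)                          # sample SD of the differences
n = len(diffs)

t_stat = d_bar / (s_d / math.sqrt(n))       # paired-samples t statistic
cohens_d = d_bar / s_d                      # effect size for a paired design

print(f"t = {t_stat:.2f}, d = {cohens_d:.2f}")
```

Note that the sign of t depends on the order of subtraction: the study reports negative t values, consistent with computing pretest minus posttest when posttest means are higher.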

Similar Papers
  • Research Article
  • Cited by 9
  • 10.14742/ajet.9434
The AI Assessment Scale (AIAS) in action: A pilot implementation of GenAI-supported assessment
  • Oct 16, 2024
  • Australasian Journal of Educational Technology
  • Leon Furze + 3 more

The rapid adoption of generative artificial intelligence (GenAI) technologies in higher education has raised concerns about academic integrity, assessment practices and student learning. Banning or blocking GenAI tools has proven ineffective, and punitive approaches ignore the potential benefits of these technologies. As a result, assessment reform has become a pressing topic in the GenAI era. This paper presents the findings of a pilot study conducted at British University Vietnam exploring the implementation of the Artificial Intelligence Assessment Scale (AIAS), a flexible framework for incorporating GenAI into educational assessments. The AIAS consists of five levels, ranging from “no AI” to “full AI,” enabling educators to design assessments that focus on areas requiring human input and critical thinking. The pilot study results indicate a significant reduction in academic misconduct cases related to GenAI and enhanced student engagement with GenAI technology. The AIAS facilitated a shift in pedagogical practices, with faculty members incorporating GenAI tools into their modules and students producing innovative multimodal submissions. The findings suggest that the AIAS can support the effective integration of GenAI in higher education, promoting academic integrity while leveraging technology’s potential to enhance learning experiences. Implications for practice or policy: Higher education institutions should adopt flexible frameworks like the AIAS to guide ethical integration of GenAI into assessment practices. Educators should design assessments that leverage GenAI capabilities, while supporting critical thinking and human input. Institutional policies related to GenAI should be developed in consultation with stakeholders and regularly updated to keep pace with technological advancements. Policymakers should prioritise research funding into the impacts of GenAI on higher education to inform evidence-based practices.

  • Research Article
  • Cited by 177
  • 10.1016/j.caeo.2023.100151
Generative AI tools and assessment: Guidelines of the world's top-ranking universities
  • Oct 23, 2023
  • Computers and Education Open
  • Benjamin Luke Moorhouse + 2 more


  • Research Article
  • Cited by 1
  • 10.33271/nvngu/2025-2/181
Dialogue with generative artificial intelligence: is its “product” free from academic integrity violations?
  • Apr 30, 2025
  • Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu
  • A Artyukhov + 4 more

Purpose. This article aims to analyze the role of generative artificial intelligence (GenAI), specifically ChatGPT, in educational activities while addressing concerns regarding academic integrity. The study explores the ambiguous boundaries of GenAI’s involvement in coursework, its potential ethical and technological challenges, and the need for clear policies regulating its use in education. Methodology. This study employs a mixed-methods approach, combining bibliometric analysis, direct interaction with ChatGPT, and a survey of Ukrainian students. Findings. The findings of this study reveal several key insights into the use of GenAI, specifically ChatGPT, in educational settings and its impact on academic integrity. The findings underscore the need for educational institutions to develop and implement policies that regulate GenAI’s role in academic activities. While GenAI offers significant potential as a technological assistant, there are risks associated with its misuse, particularly concerning academic dishonesty and the erosion of academic standards. Originality. The study’s originality lies in the comprehensive analysis of the problem of integrating GenAI, in particular ChatGPT, into the educational process from the point of view of academic integrity. For the first time, a systematic view of the stages of user interaction with GenAI has been proposed, potential points of violation of academic integrity at each of these stages are identified, and a “white box” concept has been developed to describe the use of GenAI, which allows controlling input and output parameters, minimizing risks. In addition, the study contains empirical data obtained as a result of a large-scale survey of Ukrainian students on their attitude to the use of GenAI in education, the level of awareness of university policies regarding GenAI, and support for the use of GenAI provided that academic integrity is observed. 
This outcome allows identifying the gap between existing practices and the need to develop effective strategies for integrating GenAI into the educational process. Practical value. The practical value of the work lies in the fact that the study’s results can serve as the basis for the development of clear recommendations and policies on using GenAI in higher education institutions. The proposed “white box” model can be applied to create practical tools that will help students and teachers understand the potential risks and consequences of using GenAI and develop skills for responsible use of these technologies. The student survey results can be used to inform and ensure dialogue between stakeholders on the optimal ways of integrating GenAI into the educational space, taking into account ethical aspects and the need to maintain academic integrity.

  • Research Article
  • Cited by 15
  • 10.1016/j.ijme.2024.101041
Enhancing academic integrity among students in GenAI Era: A holistic framework
  • Aug 14, 2024
  • The International Journal of Management Education
  • Tareq Rasul + 11 more


  • Research Article
  • 10.1080/10494820.2025.2611124
Generative artificial intelligence in assessment: a missing discourse on integrity, originality, and validity
  • Jan 6, 2026
  • Interactive Learning Environments
  • Som Nath Ghimire + 2 more

The increasing use of generative artificial intelligence (GenAI) technologies by students in assessment practices has drawn considerable attention in higher education (HE). This paper examines how HE students in Nepal perceive and experience GenAI use in assessment practices and how they associate its impacts with academic integrity, originality of student work, and assessment validity. Based on a longitudinal design, this study employed semi-structured interviews to generate the overall study data. Our findings showed that, while Nepali HE students appreciated the wide-ranging capabilities of GenAI, their justification of self-assumed frameworks and denial to acknowledge GenAI use in their work produced more concerning findings. In particular, as these students’ conceptions of GenAI emerged, how they subjectively drew ethical frameworks for GenAI use and their false impressions that text prompts and modifications of GenAI’s resulting outputs could retain the creative and intellectual values of their work threatened traditional notions of academic integrity and originality. Further, how they overestimated GenAI capabilities for reducing cognitive loads undermined the core visions of HE assessment systems. Our findings contribute to the practical understanding that urgent policy interventions and GenAI literacy programmes are required to motivate thoughtful, responsible, and transparent GenAI use for effective assessment practices in HE.

  • Research Article
  • 10.37074/jalt.2025.8.2.16
Perceived influence of GenAI on student engagement in online higher education
  • Sep 16, 2025
  • Journal of Applied Learning & Teaching
  • Mamun Ala + 4 more

This paper analyses the perceived influence of Generative Artificial Intelligence (GenAI) on student engagement in online higher education using the Self-Determination Theory (SDT) framework. Drawing on qualitative data from 27 experienced academics across the Australian tertiary sector, the study investigates the perspectives of online educators on how GenAI may influence three core psychological needs that are considered central to student engagement: autonomy, competence, and relatedness. The findings reveal that GenAI can enhance student autonomy through personalised learning opportunities, improve competence through real-time feedback and writing support, and support relatedness by enabling inclusive participation for linguistically diverse learners. Nevertheless, the study also identifies key risks, including over-reliance on GenAI, diminished critical thinking, reduced interaction with peers and instructors, reduced collaboration, and concerns around academic integrity. The paper argues that to harness GenAI’s pedagogical potential, higher education institutions must integrate GenAI literacy, student-centred instructional design, and actionable ethical frameworks. With such measures in place, GenAI can evolve from an emerging tool into a major driver of engagement, inclusion, and transformational learning in higher education.

  • Research Article
  • 10.53797/ujssh.v4i1.30.2025
Articulating Inclusion of Generative Artificial Intelligence in Higher Education
  • Mar 3, 2025
  • Uniglobal Journal of Social Sciences and Humanities

The inclusion of Generative Artificial Intelligence (GAI) in higher education is revolutionizing teaching, learning, and research processes, presenting new opportunities and challenges to institutions worldwide. This paper explores the multidimensional inclusion of GAI in transforming higher education, with an emphasis on its applications in content development, individualized learning, and academic support systems. By utilizing algorithms capable of producing creative outputs such as text, images, and simulations, GAI enables the automation of administrative processes, increasing efficiency while promoting personalized learning experiences. This paper also looks at how GAI is utilized to enhance traditional pedagogical frameworks, giving educators new tools for curriculum creation and assessment. However, in addition to its potential benefits, GAI inclusion raises important ethical, pedagogical, and technological challenges, such as data privacy, academic integrity, and the digital divide. This paper examines the growing significance of GAI with a review of existing literature, case studies, and expert perspectives, highlighting its potential to alter educational practices while advocating appropriate applications. The findings are intended to provide an exhaustive framework for policymakers, educators, and technology developers to guide the effective and ethical integration of GAI into higher education institutions. Finally, this paper contributes to the discussion of how GAI might improve academic experiences and prepare future generations for a fast-changing technological landscape.

  • Research Article
  • Cited by 1
  • 10.3390/educsci15040501
Perceptions of Generative AI Tools in Higher Education: Insights from Students and Academics at Sultan Qaboos University
  • Apr 16, 2025
  • Education Sciences
  • Alsaeed Alshamy + 2 more

This study investigates the perceptions of generative artificial intelligence (GenAI) tools, such as ChatGPT, among students and academics at Sultan Qaboos University (SQU) within the context of higher education in Oman. Using the Technology Acceptance Model (TAM), it explores five key dimensions: actual use (AU), ease of use (EU), perceived usefulness (PU), perceived challenges (PC), and intention to use (IU). Data collected from 555 students and 168 academics provide valuable insights into the opportunities and challenges associated with the adoption of GenAI tools, based on the results of a t-test. The findings reveal notable differences between students and academics regarding their perceptions of GenAI tools across all TAM variables. Students report frequent use of GenAI for academic support, including personalized learning, brainstorming, and completing assignments, while academics highlight its role in developing learning materials, assessments, lesson plans, and customizing learning content. Both groups recognize its potential to enhance efficiency and innovation in academic practices. However, concerns arise regarding over-reliance on GenAI, diminished critical thinking and creativity, and academic integrity risks. Academics consistently express greater concerns about these challenges than students, particularly regarding plagiarism, academic misconduct, and the potential for over-reliance on GenAI. Despite these challenges, the majority of students and academics indicate a willingness to continue using GenAI tools. This contrast underscores the need for tailored interventions to address the distinct concerns of students and academics. These findings highlight the need for regulatory frameworks, comprehensive institutional guidelines, and targeted training programs to ensure the ethical and responsible use of GenAI technologies. 
By addressing these critical areas, higher education institutions in Oman can leverage the potential of GenAI while safeguarding academic integrity and fostering essential skills such as critical thinking and creativity.

  • Research Article
  • Cited by 29
  • 10.3389/bjbs.2024.14048
Generative AI in Higher Education: Balancing Innovation and Integrity.
  • Jan 9, 2025
  • British journal of biomedical science
  • Nigel J Francis + 2 more

Generative Artificial Intelligence (GenAI) is rapidly transforming the landscape of higher education, offering novel opportunities for personalised learning and innovative assessment methods. This paper explores the dual-edged nature of GenAI's integration into educational practices, focusing on both its potential to enhance student engagement and learning outcomes and the significant challenges it poses to academic integrity and equity. Through a comprehensive review of current literature, we examine the implications of GenAI on assessment practices, highlighting the need for robust ethical frameworks to guide its use. Our analysis is framed within pedagogical theories, including social constructivism and competency-based learning, highlighting the importance of balancing human expertise and AI capabilities. We also address broader ethical concerns associated with GenAI, such as the risks of bias, the digital divide, and the environmental impact of AI technologies. This paper argues that while GenAI can provide substantial benefits in terms of automation and efficiency, its integration must be managed with care to avoid undermining the authenticity of student work and exacerbating existing inequalities. Finally, we propose a set of recommendations for educational institutions, including developing GenAI literacy programmes, revising assessment designs to incorporate critical thinking and creativity, and establishing transparent policies that ensure fairness and accountability in GenAI use. By fostering a responsible approach to GenAI, higher education can harness its potential while safeguarding the core values of academic integrity and inclusive education.

  • Research Article
  • 10.53761/fh6q4v89
Embedding Generative AI as a digital capability into a year-long skills program.
  • Jun 17, 2025
  • Journal of University Teaching and Learning Practice
  • David Smith + 6 more

The arrival of Generative Artificial Intelligence (GenAI) into higher education has significantly transformed assessment practices and pedagogical approaches. Large Language Models (LLMs) powered by GenAI present unprecedented opportunities for personalised learning journeys. However, the emergence of GenAI in higher education raises concerns regarding academic integrity and the development of essential cognitive and creative skills among students. Critics worry about the potential decline in academic standards and the perpetuation of biases inherent in the training sets used for LLMs. Addressing these concerns requires clear frameworks and continual evaluation and updating of assessment practices to leverage GenAI's capabilities while preserving academic integrity. Here, we evaluated the integration of GenAI into a year-long MSc program to enhance student understanding and confidence in using GenAI. Approaching GenAI as a digital competency, its use was integrated into core skills modules across two semesters, focusing on ethical considerations, prompt engineering, and tool usage. The assessment tasks were redesigned to incorporate GenAI, which takes a process-based assessment approach. Students' perceptions were evaluated alongside skills audits, and they reported increased confidence in using GenAI. Thematic analysis of one-to-one interviews revealed a cyclical relationship between students' usage of GenAI, experience, ethical considerations, and learning adaptation.

  • Research Article
  • Cited by 1
  • 10.1007/s40979-025-00195-6
From policy to practice: the regulation and implementation of generative AI in Swedish higher education institutes
  • Jul 21, 2025
  • International Journal for Educational Integrity
  • Charlotte Erhardt + 5 more

Background The rapid development of generative artificial intelligence (GenAI) is reshaping higher education by offering innovative solutions in course design, assessment, and learning experiences. Despite its potential, GenAI integration poses ethical, pedagogical, and practical challenges, but also a risk of academic misconduct. This study explores how Swedish higher education institutions (HEIs) are addressing GenAI through guidelines, policy documents, and public website information. Methods A qualitative manifest content analysis for objectivity and consistency was conducted on GenAI-related documents and website information from Swedish HEIs. Forty-nine institutions were contacted, with 36 providing relevant data. Data collection involved email correspondence and systematic searches on public websites. Results Few formal GenAI guidelines exist across Swedish HEIs. Independent institutions were more likely to have established guidelines for both staff and students, whereas universities or university colleges often provided more GenAI-related information on their websites. Five categories were identified: Good academic practice; GenAI use and governance in education; Information governance; Ethical and social impact; and GenAI essentials, the latter unique to websites. Good academic practice was the most emphasized, focusing on transparency, responsibility, and the challenges of GenAI-related misconduct. Conclusions Taken together, GenAI integration in higher education remains early and uneven, with some institutions implementing formal guidelines while others are still developing policies. This inconsistency calls for national directives to balance GenAI´s benefits with ethical concerns, promote GenAI literacy, and ensure equitable access. Rapid technological change challenges HEIs to update policies that ensure academic integrity and fairness. Future research should foster collaborative policy development among HEIs, policymakers, and technology providers.

  • Research Article
  • 10.3389/fpos.2025.1666661
Generative AI governance in higher education: a case study from Africa
  • Dec 9, 2025
  • Frontiers in Political Science
  • Donrich Thaldar + 12 more

Introduction The rapid rise of generative artificial intelligence (Gen-AI), particularly large language models (LLMs), is reshaping the higher education landscape. Yet, there is limited empirical documentation of how African universities are integrating Gen-AI into teaching, learning, and research. This study presents a case study of the University of KwaZulu-Natal (UKZN), one of the first African institutions to develop and implement comprehensive academic guidelines for the responsible use of Gen-AI, aligned with national policy priorities and global debates on academic integrity, transparency, and innovation. Methods Adopting a qualitative, single-institution case study design, this research draws on process tracing, comparative policy analysis, institutional records, and the authors’ direct involvement as members of the AI Task Team. The guideline development process was documented and analysed, from inception and internal deliberation to external peer review, institutional consultation, and final adoption. Results The resulting UKZN AI Academic Guidelines are based on four foundational principles: encouraging innovation, ensuring ethical and responsible use, maintaining academic rigour, and building institutional capacity. They establish clear policies on Gen-AI adoption across teaching and research, including curriculum integration, standards for disclosure and authorship, approaches to plagiarism, and guidance on data protection. The guidelines also provide a tiered disclosure framework and embed capacity-building initiatives to support AI literacy among staff and students. Discussion This case study demonstrates how a higher education institution in the Global South can translate national AI policy into actionable institutional governance while addressing contextual challenges such as resource constraints, digital divides, and multicultural considerations. 
By framing Gen-AI as an enabling tool rather than a threat, the UKZN model offers a replicable pathway for other African and Global South universities seeking to integrate AI responsibly, enhance academic productivity, and prepare graduates for an AI-driven future.

  • Research Article
  • 10.47408/jldhe.vi32.1426
Building tomorrow's learning landscape through AI integration and development in higher education
  • Oct 31, 2024
  • Journal of Learning Development in Higher Education
  • Shivani Wilson-Rochford + 1 more

Birmingham City University (BCU) has responded proactively to the evolving landscape of generative artificial intelligence (GenAI) in higher education by creating written guidance and workshops for staff and students. This initiative addresses prevalent concerns surrounding AI in education and is rooted in a sector report assessing the positions of higher education institutions (HEIs) and offering guidance on effective integration of GenAI use in teaching and assessment. Our project carries several impactful aspects, including written guidance and workshops for staff and students, which reflects BCU’s dedication to addressing common concerns surrounding GenAI in education. Furthermore, the multi-agency collaborative approach brings diverse perspectives to the table. The different perspectives enable us to tailor our learner development offer through knowledge of curriculum and assessment in a particular discipline and whether that discipline is more susceptible to GenAI use than others. The insights from learner developers in creating student guidelines ensure the project is attuned to the needs and concerns of the student body, promoting a student-centred approach. This session focused on an institutional take on GenAI through the collaborative development of staff and student guidance at BCU. The presentation outlined the new AI staff and student guidance project and discussed how effective integration of GenAI education has been implemented at BCU. It also noted positive influences of GenAI through our new staff and student workshops which bring together existing and new knowledge around academic integrity and assessment literacy. Finally, it highlighted next steps around AI integration for the Education Development Service at BCU.

  • Research Article
  • Cited by 31
  • 10.53761/54fs5e77
Higher Education’s Generative Artificial Intelligence Paradox: The Meaning of Chatbot Mania
  • Apr 19, 2024
  • Journal of University Teaching and Learning Practice
  • Juergen Rudolph + 2 more

Higher education is currently under a significant transformation due to the emergence of generative artificial intelligence (GenAI) technologies, the hype surrounding GenAI and the increasing influence of educational technology business groups over tertiary education. This commentary, prepared for the Special Issue of the Journal of University Teaching & Learning Practice (JUTLP) on “Enhancing student engagement using Artificial Intelligence (AI) and chatbots,” delves into the complex landscape of opportunities and threats that AI chatbots, including ChatGPT, introduce to the realm of higher education. We argue that while GenAI offers promise in enhancing pedagogy, research, administration, and student support, concerns around academic integrity, labour displacement, embedded biases, environmental sustainability, increased commercialisation, and regulatory gaps necessitate a critical approach. Our commentary advocates for the development of critical AI literacy among educators and students, emphasising the necessity to foster an environment of responsible innovation and informed use of AI. We posit that the successful integration of AI in higher education must be grounded in the principles of ethics, equity, and the prioritisation of educational aims and human values. By offering a critical and nuanced exploration of these issues, our commentary aims to contribute to the ongoing discourse on how higher education institutions can navigate the rise of GenAI, ensuring that technological advancements benefit all stakeholders while upholding core academic values.

  • Research Article
  • Cited by 3
  • 10.14742/apubs.2024.1218
Gen AI and student perspectives of use and ambiguity
  • Nov 11, 2024
  • ASCILITE Publications
  • Tim Fawns + 17 more

Advances in generative artificial intelligence (GenAI) have created uncertainties and tensions in higher education, particularly concerning learning, equity and quality. Despite emerging empirical research, much current policy is based on assumptions about how and why students are using GenAI. This Pecha Kucha reports on 20 online focus groups involving 79 students from four Australian universities. Each focus group represents a mix of disciplines and levels of study (including undergraduate and postgraduate). We conducted reflexive thematic analysis, adopting a relational view of AI (Bearman & Ajjawi, 2023) that supports a nuanced examination of how AI uses are enacted, understood, and contested within educational settings. Our study shows that students use GenAI in diverse and complex ways and their beliefs about GenAI contain ambiguity, contradictions, and tensions. In this pecha kucha we focus on five interrelated tensions, identified across participants, and selected as particularly significant and challenging for educators. The salience of these tensions varied across participants but, together, they paint a complex picture of student engagement with GenAI. Tension 1 is between student perceptions of AI in terms of enhanced efficiency and concerns about academic integrity. Students reported that GenAI tools could speed up writing, editing, summarising, and simplifying complex materials. However, many also feared that short-cuts and efficiencies could lead to accusations of cheating. Tension 2 is between widespread adoption of GenAI tools and ambiguous policy around acceptable use. Many students used a diverse range of GenAI tools, yet a number of participants voiced uncertainty about allowable use of GenAI in assessments. A perceived lack of clear and detailed guidance from universities created confusion and anxiety, and the development of personal rules to avoid accusations of academic misconduct. Tension 3 is between empowerment and dependency. 
AI tools were sometimes seen as reducing inequalities (e.g. for international students or those requiring language support). On the other hand, some students expressed concerns about becoming dependent on GenAI tools where tasks were made too easy, undermining learning and skill development. Tension 4 is between access and equity. Closely related to tension 2, here, the reduction of barriers to academic writing and accessing educational resources is contrasted with concerns around exacerbating inequalities due to variation in access and support. These concerns are amplified through diversity of engagement, beliefs of students and educators around acceptability, and contextual pressures (e.g. fear of being left behind, time pressures, the perceived stakes of assessment). Tension 5 is between beliefs about deepened engagement with learning materials and reduced quality or accuracy of GenAI output. Some students reported that GenAI tools could provide useful perspectives on resources or simplify complex texts. However, many voiced frustration that GenAI tools sometimes provided incorrect information, required verification or “missed the point”, which could lead to significant additional work. These tensions highlight areas where students need additional support and guidance. The overlaps and entanglements of these tensions make their navigation in higher education particularly complex. These findings suggest practical implications for educators, policymakers, and institutions. For instance, to better support students, institutions should continue to develop clear, context-sensitive guidelines that resolve ambiguities around acceptable use (Tensions 1 and 2) and provide concrete strategies to balance the benefits of efficiency with concerns over academic integrity and dependency (Tensions 1 and 3). 
Additionally, efforts should be made to ensure equitable access to GenAI tools and support (Tension 4) while helping students critically assess the quality of AI-generated content (Tension 5).
