Articles published on Generation AI
- New
- Research Article
- 10.1080/15313220.2025.2584269
- Nov 5, 2025
- Journal of Teaching in Travel & Tourism
- Dave Roberts + 1 more
ABSTRACT This opinion paper explores the evolving role of revenue management (RM) in hospitality and tourism education and argues for a strategic rethinking of how RM is taught at the undergraduate level. Drawing on their experience and a review of academic programs and literature, the authors identify a persistent gap between industry expectations and academic curricula. The paper advocates for RM to be a required course, taught through a commercial strategy lens with a strong emphasis on applied analytics, optimization, and artificial intelligence, including generative AI. It also highlights the importance of critical thinking and data interpretation as essential competencies. Finally, we introduce RevME (Revenue Management and Analytics Educators), an international association supporting faculty through collaborative resources, including a dynamic textbook and training initiatives. Overall, we argue that the goal should be to align RM education with industry needs and better prepare students for data-driven decision-making in a rapidly transforming hospitality and tourism landscape.
- New
- Research Article
- 10.1108/lthe-03-2025-0014
- Nov 4, 2025
- Learning and Teaching in Higher Education: Gulf Perspectives
- Oualid Abidi + 2 more
Purpose This study examines how generative AI tools affect business students’ academic performance by investigating whether flexible AI policies promote deeper learning, enhance self-efficacy and facilitate tacit knowledge acquisition in a Middle Eastern context, while ensuring efficiency and academic integrity. Design/methodology/approach A qualitative, exploratory study observed 20 final-year business students in Kuwait during five in-class activities using generative AI tools. Semi-structured interviews complemented the researcher’s observations. Thematic analysis revealed patterns in benefits, challenges and learning processes, leading to the development of the AI-powered learning loop framework to explain academic performance outcomes. Findings The study indicates that generative AI tools assist students by saving time, organizing ideas and enhancing understanding. While they reduce cognitive load and boost confidence, concerns about accuracy and ethical implications remain. A structured AI policy can promote responsible use and improve academic performance, supporting the proposed AI-powered learning loop model. Research limitations/implications The exploratory design and small sample size limit the findings to a private business college in Kuwait, reducing generalizability to broader higher education. Despite relying on self-reported data, the AI-powered learning loop framework provides a basis for future validation and research across diverse contexts. Practical implications Higher education institutions can promote integrity by requiring students to explain and defend their AI-assisted work. This approach reduces academic dishonesty and enhances critical thinking. Faculty should integrate discussions and source validation into assessments. Clear AI policies from policymakers can alleviate student anxiety and foster essential skills for an AI-driven workplace. 
Originality/value This study explores the effects of tolerant AI policies in Middle Eastern higher education by introducing the AI-powered learning loop, a framework connecting cognitive load, self-efficacy, tacit knowledge and academic performance. It offers insights for academics and policymakers on responsibly integrating generative AI to enhance sustainable learning outcomes.
- New
- Research Article
- 10.1002/spe.70029
- Nov 4, 2025
- Software: Practice and Experience
- Haowei Cheng + 6 more
ABSTRACT Introduction Requirements engineering (RE) faces challenges due to the handling of increasingly complex software systems. These challenges can be addressed using generative artificial intelligence (GenAI). Given that GenAI‐based RE has not been systematically analyzed in detail, this review examines the related research, focusing on trends, methodologies, challenges, and future work directions. Methods A systematic methodology for paper selection, data extraction, and feature analysis is used to comprehensively review 238 articles published from 2019 to 2025 and available from major academic databases. Results Although generative pretrained transformer models dominate current applications (67.3% of studies), the research focus remains unevenly distributed across RE phases, with analysis (30.0%) and elicitation (22.1%) receiving the most attention and management (6.8%) remaining underexplored. Three core challenges—reproducibility (66.8%), hallucinations (63.4%), and interpretability (57.1%)—form a tightly interlinked triad affecting trust and consistency, and strong co‐occurrence correlations indicate that these challenges must be addressed holistically. Industrial adoption remains nascent, with > 90% of studies corresponding to early‐stage development and only 1.3% reaching production‐level integration. Evaluation practices show maturity gaps, limited tool/dataset availability, and fragmented benchmarking approaches. Conclusions Despite the transformative potential of GenAI‐based RE, several barriers hinder its practical adoption. The strong correlations among core challenges demand specialized architectures targeting interdependencies rather than isolated solutions. The limited real‐world deployment reflects systemic bottlenecks in generalizability, data quality, and scalable evaluation methods. Successful adoption requires coordinated development across technical robustness, methodological maturity, and governance integration.
A multiphase research roadmap emphasizing evaluation infrastructure strengthening, governance‐aware development, and industrial‐scale standardization is proposed.
- New
- Research Article
- 10.57264/cer-2025-0150
- Nov 4, 2025
- Journal of comparative effectiveness research
- Manuel Cossio + 1 more
Patient and public involvement in health technology assessment (HTA) has progressed from best practice to policy requirement, yet communication barriers persist. This perspective explores how plain language summaries (PLSs) and summaries of information for patients (SIPs) can enhance equity and transparency in HTA. Building on recent European regulatory developments and emerging research, it discusses the balance between accessibility, quality and feasibility. Generative artificial intelligence offers the potential to scale PLS and SIP production, but its responsible integration requires oversight, collaboration and a continued focus on equity and patient-centeredness within evolving HTA frameworks.
- New
- Research Article
- 10.7759/cureus.96034
- Nov 3, 2025
- Cureus
- Leo Morjaria + 3 more
Generative AI: A Disruptor to Health Professions Learner Assessment
- New
- Research Article
- 10.12737/2587-9103-2025-14-5-95-102
- Nov 3, 2025
- Scientific Research and Development. Modern Communication Studies
- L Malygina + 1 more
Introduction. The relevance of this study is driven by the need for a theoretical understanding of generative AI's impact on the television industry. The article argues that a systematic methodological framework is required to analyze these processes, which may not just enhance but fundamentally disrupt the industry. Aim. To substantiate the applicability of Clayton M. Christensen's theory of disruptive innovation for the systematic analysis and forecasting of the transformation of television broadcasting models in the era of generative AI. Methodology and research methods. The research methodology is based on the theory of disruptive innovation. Methods of theoretical analysis and conceptual modeling are used to interpret the characteristics of AI and develop market transformation scenarios. Results. It is proven that generative AI has the key characteristics of a disruptive innovation (cost reduction, democratization of access, new value propositions). The mechanisms of 'low-end' and 'new-market' disruption are analyzed. The study reveals that traditional broadcasters face the classic 'innovator's dilemma,' which hinders their ability to respond adequately to the threat. Scientific Novelty. For the first time, the theory of disruptive innovation is systematically applied to analyze the impact of generative AI on television, enabling a shift from describing technologies to explaining and forecasting media market dynamics. Practical Significance. The research findings provide a strategic tool for media managers to assess the risks and opportunities of AI and to make informed business decisions under conditions of high uncertainty.
- New
- Research Article
- 10.3389/fpubh.2025.1690119
- Nov 3, 2025
- Frontiers in Public Health
- Zhongyu Shi + 1 more
Toward emotional mediation: generative AI in art therapy for psychosocial health support
- New
- Research Article
- 10.3389/fdgth.2025.1653369
- Nov 3, 2025
- Frontiers in Digital Health
- Nafiz Fahad + 9 more
Generative artificial intelligence (G-AI) has moved from proof-of-concept demonstrations to practical tools that augment radiology, dermatology, genetics, drug discovery, and electronic-health-record analysis. This mini-review synthesizes fifteen studies published between 2020 and 2025 that collectively illustrate three dominant trends: data augmentation for imbalanced or privacy-restricted datasets, automation of expert-intensive tasks such as radiology reporting, and generation of new biomedical knowledge ranging from molecular scaffolds to fairness insights. Image-centric work still dominates, with GANs, diffusion models, and Vision-Language Models expanding limited datasets and accelerating diagnosis. Yet narrative (EHR) and molecular design domains are rapidly catching up. Despite demonstrated accuracy gains, recurring challenges persist: synthetic samples may overlook rare pathologies, large multimodal systems can hallucinate clinical facts, and demographic biases can be amplified. Robust validation, interpretability techniques, and governance frameworks therefore remain essential before G-AI can be safely embedded in routine care.
- New
- Research Article
- 10.1055/a-2718-4633
- Nov 3, 2025
- Gesundheitswesen (Bundesverband der Arzte des Offentlichen Gesundheitsdienstes (Germany))
- Tim Kekeritz + 1 more
This study examined the extent to which generative artificial intelligence can be used for analyzing reports from the statutory accident insurance system. To this end, medical documents were evaluated using targeted prompts with both ChatGPT and a specially customized CustomGPT model. The results showed that simple tasks, such as extracting basic personal data or identifying missing causal links, were performed with high accuracy and a low error rate. However, when it came to more complex legal issues or the interpretation of contextual information, the models demonstrated limited reliability. The use of a tailored CustomGPT model did not yield a significant improvement in response quality compared to the standard version. In its current stage of development, the technology is not suitable for practical use in evaluating reports from the statutory accident insurance system. Future research should investigate newer versions of ChatGPT as well as alternative AI systems. It is expected that generative AI will soon be reliably applicable to the use cases explored in this study.
- New
- Research Article
- 10.3390/su17219793
- Nov 3, 2025
- Sustainability
- Hanan Sharif + 2 more
Deepfake-style AI tutors are emerging in online education, offering personalized and multilingual instruction while introducing risks to integrity, privacy, and trust. This study aims to understand their pedagogical potential and governance needs for responsible integration. A PRISMA-guided, systematic review of 42 peer-reviewed studies (2015–early 2025) was conducted from 362 screened records, complemented by semi-structured questionnaires with 12 assistant professors (mean experience = 7 years). Thematic analysis using deductive codes achieved strong inter-coder reliability (κ = 0.81). Four major themes were identified: personalization and engagement, detection challenges and integrity risks, governance and policy gaps, and ethical and societal implications. The results indicate that while deepfake AI tutors enhance engagement, adaptability, and scalability, they also pose risks of impersonation, assessment fraud, and algorithmic bias. Current detection approaches based on pixel-level artifacts, frequency features, and physiological signals remain imperfect. To mitigate these challenges, a four-pillar governance framework is proposed, encompassing Transparency and Disclosure, Data Governance and Privacy, Integrity and Detection, and Ethical Oversight and Accountability, supported by a policy checklist, responsibility matrix, and risk-tier model. Deepfake AI tutors hold promise for expanding access to education, but fairness-aware detection, robust safeguards, and AI literacy initiatives are essential to sustain trust and ensure equitable adoption. These findings not only strengthen the ethical and governance foundations for generative AI in higher education but also contribute to the broader agenda of sustainable digital education. 
By promoting transparency, fairness, and equitable access, the proposed framework advances the long-term sustainability of learning ecosystems and aligns with the United Nations Sustainable Development Goal 4 (Quality Education) through responsible innovation and institutional resilience.
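The inter-coder reliability reported above (κ = 0.81) is Cohen's kappa, which corrects raw agreement between two coders for the agreement expected by chance. A minimal sketch of the statistic, using hypothetical theme codes (the excerpts and labels below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same code independently
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two coders using the review's four themes
coder_1 = ["personalization", "integrity", "governance", "ethics", "integrity", "governance"]
coder_2 = ["personalization", "integrity", "governance", "ethics", "governance", "governance"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.77
```

A kappa of 0.81, as reported, sits in the range conventionally read as "almost perfect" agreement, which is why the authors describe it as strong reliability.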
- New
- Research Article
- 10.1007/s40319-025-01647-9
- Nov 3, 2025
- IIC - International Review of Intellectual Property and Competition Law
- Christophe Geiger + 2 more
Abstract The fair remuneration of authors and performers for the exploitation of their work is at the core of the rationales of copyright and related rights. Furthermore, the remuneration of creators benefits from a strong fundamental rights justification at the international and European level. However, the copyright system has since its inception poorly delivered on this objective: the revenues generated by the exploitation of creative works are still often unfairly distributed to authors and performers. Most recently, their revenues have also been affected by crises like the COVID-19 pandemic and the increasing use of generative AI technologies to output works potentially competing with those of human creators. The EU Digital Single Market Directive introduced for the first time in EU law general copyright-contract rules to protect authors and performers in their contractual relations with derivative rightholders (Arts. 18–22 Directive (EU) 2019/790 (“CDSMD”)). However interesting and positive these rules are in theory, there are doubts that a “contractual-only” protection will bring the expected results in practice, since it requires creators to turn against their producers to demand the revision of their agreements, which very often carries negative consequences for creators. National experiences with copyright contract rules have shown limited results so far. Therefore, other mechanisms ensuring that a fair remuneration flows directly back to creators should urgently be considered. Effective implementation of the principle of appropriate and proportionate remuneration of authors and performers pursuant to Art. 18 CDSMD and, more generally, of the fair remuneration rationale of copyright law, must therefore result from the combination of several mechanisms that cannot be easily circumvented and which secure efficient revenue flows to creators without burdening them excessively with enforcement obligations.
In this context, the article argues that wider use of remuneration rights appears the better way forward. Tracing the latest developments, it explores the policy options to implement the different types of remuneration rights (i.e., residual, other per se, and “limitation-based”). It analyzes several national experiences and pending CJEU referrals to identify clear principles for the development, at the EU and international level, of better-functioning remuneration systems for creators, thus securing that the copyright system can (finally) fulfil one of the main functions for which it has been established.
- New
- Research Article
- 10.1007/s11548-025-03524-9
- Nov 3, 2025
- International journal of computer assisted radiology and surgery
- Mario A Cypko + 6 more
Bayesian networks (BNs) are valuable for clinical decision support due to their transparency and interpretability. However, BN modelling requires considerable manual effort. This study explores how integrating large language models (LLMs) with retrieval-augmented generation (RAG) can improve BN modelling by increasing efficiency, reducing cognitive workload, and ensuring accuracy. We developed a web-based BN modelling service that integrates an LLM-RAG pipeline. A fine-tuned GTE-Large embedding model was employed for knowledge retrieval, optimised through recursive chunking and query expansion. To ensure accurate BN suggestions, we defined a causal structure for medical idioms by unifying existing BN frameworks. GPT-4 and Mixtral 8x7B were used to handle complex data interpretation and to generate modelling suggestions, respectively. A user study with four clinicians assessed usability, retrieval accuracy, and cognitive workload using NASA-TLX. The study demonstrated the system's potential for efficient and clinically relevant BN modelling. The RAG pipeline improved retrieval accuracy and answer relevance. Recursive chunking with the fine-tuned embedding model GTE-Large achieved the highest retrieval accuracy score (0.9). Query expansion and HyDE optimisation enhanced retrieval accuracy for semantic chunking (from 0.75 to 0.85). Responses maintained high faithfulness (0.9). However, the LLM occasionally failed to adhere to predefined causal structures and medical idioms. All clinicians, regardless of BN experience, created comprehensive models within one hour. Experienced clinicians produced more complex models, but occasionally introduced causality errors, while less experienced users adhered more accurately to predefined structures. The tool reduced cognitive workload (2/7 on the NASA-TLX) and was described as intuitive, although workflow interruptions and minor technical issues highlighted areas for improvement.
Integrating LLM-RAG into BN modelling enhances efficiency and accuracy. Future work may focus on automated preprocessing, refinements of the user interface, and extending the RAG pipeline with validation steps and external biomedical sources. Generative AI holds promise for expert-driven knowledge modelling.
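The recursive chunking step mentioned above is a standard RAG preprocessing technique: split documents on progressively finer separators until every chunk fits a length budget suitable for embedding. A minimal generic sketch (not the authors' pipeline; the separators and the character limit are assumptions):

```python
def recursive_chunk(text, max_len=200, separators=("\n\n", ". ", " ")):
    """Recursively split `text` on coarser-to-finer separators until
    every chunk fits within `max_len` characters."""
    if len(text) <= max_len:
        return [text] if text.strip() else []
    if not separators:
        # Last resort: hard split at max_len
        return [text[i:i + max_len] for i in range(0, len(text), max_len)]
    sep, rest = separators[0], separators[1:]
    chunks = []
    for part in text.split(sep):
        if len(part) <= max_len:
            if part.strip():
                chunks.append(part)
        else:
            chunks.extend(recursive_chunk(part, max_len, rest))
    return chunks
```

Chunks produced this way tend to respect paragraph and sentence boundaries, which is one plausible reason recursive chunking outperformed other strategies in the study's retrieval-accuracy comparison.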
- New
- Research Article
- 10.17507/tpls.1511.30
- Nov 3, 2025
- Theory and Practice in Language Studies
- Yang Yang + 2 more
The proliferation of generative AI tools such as ChatGPT has transformed feedback provision in EFL writing, offering scalable and immediate support to learners. However, learner engagement with AI-generated feedback remains highly variable, raising questions about the internal mechanisms that shape feedback uptake. This study investigates how feedback literacy predicts both the behavioral adoption and perceived usefulness of ChatGPT-generated feedback among EFL learners, while also examining whether perceived ease of use mediates this relationship. Data were collected from 51 Chinese university students through questionnaires and revision-based tasks across three ChatGPT-supported writing assignments. Results from linear regression and bootstrapped mediation analyses revealed that feedback literacy significantly predicted both successful feedback uptake (R² = .56) and perceived usefulness (R² = .42). Moreover, perceived ease of use partially mediated this relationship, suggesting a layered cognitive-affective mechanism underlying learners’ engagement with algorithmic feedback. These findings extend feedback literacy theory beyond interpersonal contexts to AI-mediated, non-dialogic writing environments. They also refine the Technology Acceptance Model by highlighting learner competence as a critical determinant of usability and value perceptions. Pedagogically, the study underscores the need to cultivate feedback literacy as a prerequisite for meaningful engagement with AI tools in writing instruction.
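The bootstrapped mediation analysis reported above estimates an indirect effect a×b (feedback literacy → perceived ease of use → uptake) and resamples the data to obtain a confidence interval that excludes zero if mediation holds. A minimal sketch on simulated data; the variable names, effect sizes, and sample are illustrative, not the study's:

```python
import random

def cov(u, v):
    n, mu, mv = len(u), sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

def indirect_effect(x, m, y):
    """a*b: a = slope of mediator on predictor; b = partial slope of the
    outcome on the mediator, controlling for the predictor."""
    a = cov(x, m) / cov(x, x)
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / (
        cov(m, m) * cov(x, x) - cov(x, m) ** 2)
    return a * b

def bootstrap_ci(x, m, y, reps=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng, n, stats = random.Random(seed), len(x), []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * reps)], stats[int((1 - alpha / 2) * reps) - 1]

# Simulated data: literacy -> ease of use -> uptake, true indirect effect 0.6 * 0.5 = 0.3
rng = random.Random(1)
literacy = [rng.gauss(0, 1) for _ in range(120)]
ease = [0.6 * l + rng.gauss(0, 0.5) for l in literacy]
uptake = [0.5 * e + 0.2 * l + rng.gauss(0, 0.5) for e, l in zip(ease, literacy)]
low, high = bootstrap_ci(literacy, ease, uptake)
print(low > 0)  # a CI excluding zero is evidence of mediation
```

Because the direct path (the 0.2 coefficient above) is nonzero, the sketch mirrors the study's finding of partial rather than full mediation.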
- New
- Research Article
- 10.1002/lpor.202500771
- Nov 2, 2025
- Laser & Photonics Reviews
- Chengzhuo Xia + 14 more
ABSTRACT Transposed convolution, crucial in large‐parameter generative models ranging from content creation to autonomous driving, imposes substantial demands on GPU memory and energy consumption in electronic processors. Electronic processors, fundamentally limited by the von Neumann architecture and further hindered by silicon‐based quantum tunneling effects, struggle to meet the stringent real‐time requirements of modern generative workloads. In contrast, optical computing—exploiting ultra‐wide bandwidth and ultra‐low power consumption—offers a promising alternative for high‐speed transposed convolution in next‐generation AI. Here, we introduce a high‐speed and reconfigurable photonic transposed convolution accelerator (PTCA). By interleaving wavelength, temporal, and spatial dimensions and leveraging an integrated Kerr microcomb for data‐dimension expansion, the PTCA achieves tera operations per second (TOPS) with 100% bit efficiency. Experiments demonstrate a processing speed of 1.026 TOPS, making it, to the best of our knowledge, the fastest reconfigurable PTCA to date. In Fashion‐MNIST reconstruction tasks, this system achieves a mean squared error (MSE) of 0.0062 without any additional post‐processing by electronic fully connected layers. Our work thus establishes a high‐speed, reconfigurable photonic paradigm for accelerating future generative AI.
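Transposed convolution, the operation the PTCA accelerates, upsamples its input by scattering a scaled copy of the kernel from each input element into an enlarged output. A minimal 1-D, single-channel sketch of the arithmetic (purely illustrative, unrelated to the photonic implementation):

```python
def conv_transpose_1d(x, kernel, stride=2):
    """Each input element scatters a scaled copy of the kernel into an
    enlarged output, upsampling the signal (no padding, single channel)."""
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, xi in enumerate(x):
        for j, kj in enumerate(kernel):
            out[i * stride + j] += xi * kj
    return out

print(conv_transpose_1d([1, 2], [1, 1, 1]))  # → [1.0, 1.0, 3.0, 2.0, 2.0]
```

The output length (len(x) − 1) × stride + len(kernel) exceeds the input length, which is why generative decoders rely on this operation for upsampling and why it is so memory- and bandwidth-hungry on electronic hardware.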
- New
- Research Article
- 10.3390/educsci15111464
- Nov 2, 2025
- Education Sciences
- László Berényi + 2 more
The emergence of generative AI, particularly the widespread accessibility of ChatGPT, has led to challenges for higher education. The extent and manner of use are under debate. Local empirical investigations about the use and acceptance of ChatGPT contribute to effective policymaking. The study employs a specialized approach, utilizing an information system view based on the DeLone and McLean Information Systems Success Model as its theoretical framework. A survey was conducted to assess students’ opinions about ChatGPT regarding its usefulness in their studies. The model was tested using PLS-SEM with 466 Hungarian and Romanian higher education students. The model examined six constructs: information quality, system quality, service quality, use, user satisfaction, and net benefits. The results confirmed the effects of information quality and system quality on use and satisfaction, whereas service quality did not make a significant contribution. Satisfaction was found to be the key driver of use. The study contributes to a deeper understanding of AI acceptance in higher education and provides valuable considerations for policymaking. Data-oriented, task-focused policymaking is recommended over system-based regulation. Additionally, a comprehensive framework model is required for international comparisons, combining information systems success and technology acceptance models.
- New
- Research Article
- 10.1016/j.nepr.2025.104612
- Nov 1, 2025
- Nurse education in practice
- Mingyan Shen + 2 more
Exploring nursing students' acceptance of RAG-enhanced GenAI through the AIDUA model: A qualitative study.
- New
- Research Article
- 10.1016/j.compbiomed.2025.111226
- Nov 1, 2025
- Computers in biology and medicine
- Nhung Hong Thi Duong + 2 more
Designing new hit series of JAK3 inhibitors using generative AI, reinforcement learning, and molecular dynamics.
- New
- Research Article
- 10.55593/ej.29115int
- Nov 1, 2025
- Teaching English as a Second or Foreign Language--TESL-EJ
- Jonna Marie Lim + 1 more
This paper explores the integration of ChatGPT into the L2 writing classroom as a tool for enhancing teacher feedback on student essays. Using a reflexive case study methodology, we examine how generative AI (GenAI) augments teacher feedback in areas such as thesis clarity, idea development, and grammatical accuracy. By combining ChatGPT’s rapid feedback with teachers’ contextual insights, we propose a blended feedback model that leverages both AI capabilities and teachers’ expertise for more personalized feedback. Results show that the teacher effectively utilizes ChatGPT by integrating AI-generated insights with her own to refine feedback, correct inaccuracies in ChatGPT’s feedback, and engage students in meaningful dialogues based on these AI-generated insights. These strategies highlight the blended feedback model’s potential to provide comprehensive and personalized feedback, which could deepen student engagement and enhance the quality of their writing. While ChatGPT proves beneficial for formative feedback, its effectiveness is greatly enhanced when combined with the expert knowledge of teachers, ensuring that the feedback remains relevant and tailored to individual student needs. Thus, we recommend further exploration of blended feedback models and additional strategies for utilizing GenAI to enhance the quality of teacher feedback, particularly in formative assessments in L2 writing classes.
- New
- Research Article
- 10.55593/ej.29115int3
- Nov 1, 2025
- Teaching English as a Second or Foreign Language--TESL-EJ
- Rhian Webb + 1 more
Due to the rapid emergence and use of generative artificial intelligence (GenAI) by English as a foreign language (EFL) students in higher education (HE), further research is required to understand English language teaching (ELT) teachers’ training needs to effectively manage digitally enhanced teaching and learning. This study identifies teachers’ needs by investigating Turkish ELT teachers’ reactions to their students’ self-reported GenAI usage. Our transcendental phenomenological research design ensured minimal author bias from the thematically analysed, qualitative, interview data from 21 Turkish undergraduate EFL students (B1-C1 level) and six Turkish ELT teachers. Analysis has revealed that students used ChatGPT (version 3.5) as a human collaborator to build content, clarify tasks, be a critical friend, organise ideas, enhance language, and obtain feedback, which they found motivating. However, teachers’ reactions to their students’ usage were inconsistent and exposed a need for unified teacher identity development that is shaped by GenAI literacy training and supported by institutional policies that address GenAI integration into curriculum design and assessment practices.
- New
- Research Article
- 10.1016/j.nedt.2025.106855
- Nov 1, 2025
- Nurse education today
- Yaqi Zhu + 4 more
Identifying highly correlated determinants influencing student nurses' behavioral intention of using generative artificial intelligence (Generative AI): A network analysis.