Patient and clinician engagement with generative artificial intelligence (GenAI): A scoping review of implications for patient-centered communication.


Similar Papers
  • Research Article
  • 10.15758/ajk.2026.28.1.58
Generative and Large-Scale Artificial Intelligence in Exercise and Sports Medicine: A Narrative Review
  • Jan 31, 2026
  • The Asian Journal of Kinesiology
  • Bokyoung Kim + 3 more

Generative artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, has rapidly advanced in capability and accessibility, creating novel paradigms for personalized healthcare. In exercise and sports medicine, where clinical decision-making necessitates the integration of complex physiological data, individualized programming, and patient-centered communication, generative AI offers transformative potential for workflow augmentation. This narrative review synthesizes current applications, strengths, and limitations across seven core domains: (1) personalized exercise prescription, (2) performance enhancement and training support, (3) clinical rehabilitation and disease management, (4) lifestyle modification, (5) education and communication, (6) injury prevention, and (7) data analytics. LLMs demonstrated the ability to generate structured exercise prescriptions and rehabilitation protocols with moderate to high guideline compliance across cardiac and musculoskeletal rehabilitation contexts, while patient education content achieved favorable readability and clinical relevance ratings. Furthermore, methodological advancements such as prompt engineering and wearable-integrated closed-loop systems have enhanced personalization and real-time adaptability. In the domain of patient communication, generative AI tools produced readable educational materials with high factual consistency, although challenges persist regarding comorbidity screening, individualized safety verification, and cultural-linguistic contextualization. Ultimately, generative AI is poised to function as a first-draft accelerator and productivity amplifier within exercise and sports medicine. However, mandatory expert oversight, rigorous clinical validation, and robust governance frameworks remain essential prerequisites for the safe and effective integration of this approach into frontline clinical practice.
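The prompt-engineering methods the review credits with improving personalization can be pictured as a structured template that an LLM fills in for expert review. The sketch below is purely illustrative: the function name, field names, and wording are hypothetical, not drawn from the reviewed studies, and the draft it requests is explicitly framed for clinician verification rather than direct patient use.

```python
def exercise_prompt(age, condition, goal, constraints):
    """Assemble a structured exercise-prescription prompt.

    All fields are illustrative placeholders; a real template would be
    validated against clinical guidelines before use.
    """
    return (
        "You are assisting a certified exercise professional.\n"
        f"Patient: {age}-year-old with {condition}.\n"
        f"Goal: {goal}. Constraints: {constraints}.\n"
        "Draft a weekly exercise plan in FITT format "
        "(frequency, intensity, time, type) for expert review.\n"
        "Flag any contraindications for clinician verification."
    )

# Hypothetical example case
draft = exercise_prompt(
    62,
    "stable coronary artery disease",
    "improve aerobic capacity",
    "no high-intensity intervals",
)
```

Structuring the prompt this way keeps the model in the "first-draft accelerator" role the review describes: the output is a draft for mandatory expert oversight, not a prescription.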

  • Research Article
  • Cited by 36
  • 10.2196/53466
Developing Medical Education Curriculum Reform Strategies to Address the Impact of Generative AI: Qualitative Study.
  • Nov 30, 2023
  • JMIR Medical Education
  • Ikuo Shimizu + 11 more

Generative artificial intelligence (GAI), represented by large language models, has the potential to transform health care and medical education. In particular, GAI's impact on higher education has the potential to change students' learning experience as well as faculty's teaching. However, concerns have been raised about ethical considerations and the decreased reliability of existing examinations. Furthermore, in medical education, curriculum reform is required to adapt to the revolutionary changes brought about by the integration of GAI into medical practice and research. This study analyzes the impact of GAI on medical education curricula and explores strategies for adaptation. The study was conducted in the context of faculty development at a medical school in Japan. A workshop involving faculty and students was organized, and participants were divided into groups to address two research questions: (1) How does GAI affect undergraduate medical education curricula? and (2) How should medical school curricula be reformed to address the impact of GAI? The strengths, weaknesses, opportunities, and threats (SWOT) framework was used, and cross-SWOT matrix analysis was applied to devise strategies. Further, 4 researchers conducted content analysis on the data generated during the workshop discussions. The data were collected from 8 groups comprising 55 participants. Further, 5 themes about the impact of GAI on medical education curricula emerged: improvement of teaching and learning, improved access to information, inhibition of existing learning processes, problems in GAI, and changes in physicians' professionality. Positive impacts included enhanced teaching and learning efficiency and improved access to information, whereas negative impacts included concerns about reduced independent thinking and the adaptability of existing assessment methods. Further, GAI was perceived to change the nature of physicians' expertise.
Three themes emerged from the cross-SWOT analysis for curriculum reform: (1) learning about GAI, (2) learning with GAI, and (3) learning aside from GAI. Participants recommended incorporating GAI literacy, ethical considerations, and compliance into the curriculum. Learning with GAI involved improving learning efficiency, supporting information gathering and dissemination, and facilitating patient involvement. Learning aside from GAI emphasized maintaining GAI-free learning processes, fostering higher cognitive domains of learning, and introducing more communication exercises. This study highlights the profound impact of GAI on medical education curricula and provides insights into curriculum reform strategies. Participants recognized the need for GAI literacy, ethical education, and adaptive learning. Further, GAI was recognized as a tool that can enhance efficiency and involve patients in education. The study also suggests that medical education should focus on competencies that GAI can hardly replace, such as clinical experience and communication. Notably, involving both faculty and students in curriculum reform discussions fosters a sense of ownership and ensures broader perspectives are encompassed.

  • Research Article
  • 10.1108/dts-08-2025-0255
User readiness and technology adoption in AI-driven smart cities: a systematic review of generative and predictive models for advancing the SDGs
  • Dec 4, 2025
  • Digital Transformation and Society
  • Nuning Kristiani + 3 more

Purpose: This study examines the integration of generative and predictive artificial intelligence (AI) models within smart cities, focusing on how user readiness and technology adoption influence their contribution to sustainable urban development and governance. Design/methodology/approach: The study applies a systematic literature review following PRISMA guidelines and synthesizes evidence from 50 peer-reviewed studies (2018–2025) indexed in Scopus and Web of Science. It combines bibliometric mapping using VOSviewer with thematic analysis to examine the drivers, barriers and governance mechanisms shaping the adoption of generative, predictive and hybrid applications in urban contexts. Findings: Generative AI fosters participatory engagement, citizen co-design and interactive simulations, advancing SDG 11 (Sustainable Cities and Communities) and SDG 4 (Quality Education) through enhanced digital literacy and inclusive planning. Predictive AI improves operational efficiency, forecasting accuracy and data-driven policymaking, supporting SDG 9 (Industry, Innovation and Infrastructure) and SDG 13 (Climate Action) by promoting sustainable resource use and climate-resilient management. Hybrid AI integrates these strengths, addressing both social and operational aspects of smart city development and aligning with SDG 17 (Partnerships for the Goals) through cross-sector collaboration and shared governance. Collectively, these models contribute to broader sustainability goals, including SDGs 3, 7 and 12. Research limitations/implications: This review acknowledges several key limitations. Reliance on Scopus and Web of Science may exclude regionally significant or domain-specific studies not indexed in these databases. The focus on English-language publications introduces potential language bias, possibly overlooking relevant research from non-English-speaking regions. Restricting the timeframe to 2018–2025 captures recent developments but may omit earlier foundational work or the most recent studies not yet indexed. Differences in research design, policy contexts and sample characteristics also affect comparability and limit generalizability. Future research should broaden data sources, include multilingual literature and adopt mixed-methods and longitudinal approaches to enhance contextual diversity and empirical robustness. Practical implications: The findings provide practical guidance for policymakers, urban planners and technology developers to design AI governance systems that are transparent, accountable and aligned with the SDGs. Integrating generative and predictive AI can enhance operational efficiency, support participatory planning and promote responsible decision-making. The findings inform the development of adaptive policy frameworks that advance SDG 9 (Industry, Innovation and Infrastructure), SDG 11 (Sustainable Cities and Communities) and SDG 13 (Climate Action) through digital literacy initiatives, cross-sector collaboration and data-informed management. Strengthening these practices enables cities to translate AI's potential into tangible contributions to inclusive and sustainable urban transformation. Social implications: Integrating user readiness and digital literacy into AI adoption is essential for building inclusive and trustworthy smart cities. These efforts support SDG 4 (Quality Education), SDG 10 (Reduced Inequalities) and SDG 16 (Peace, Justice and Strong Institutions). Generative AI encourages citizen participation and collaborative planning, while predictive AI improves service accessibility and data-informed governance. Promoting ethical awareness and community engagement helps narrow digital divides and address bias. Collectively, these elements advance SDG 11 (Sustainable Cities and Communities) and SDG 17 (Partnerships for the Goals) by fostering socially responsive and transparent AI-driven urban development. Originality/value: This review is among the first to integrate perspectives on user readiness and technology adoption with comparative insights into generative and predictive AI in smart cities. It advances understanding of how AI-driven urban innovation supports inclusivity, efficiency and sustainability, while outlining policy directions and a future research agenda for equitable and transparent AI governance.

  • Research Article
  • 10.14742/ajet.10549
“Is this a trap?”: Student teachers’ perceptions and adoption of GenAI in assessments in three teacher education courses
  • Feb 19, 2026
  • Australasian Journal of Educational Technology
  • Tracy X P Zou + 3 more

Generative artificial intelligence (GenAI) poses unprecedented challenges and opportunities for assessment in universities. Existing studies that explore students’ adoption of GenAI in assessment show mixed and, to some extent, contradictory findings. Some studies have found optimistic views on GenAI, while others have highlighted significant concerns among students. This study aimed to explore students’ interactions with GenAI in completing non-exam assessments using a socio-technical view that recognises the sociocultural and technological factors influencing students’ behaviours. We sampled three teacher education courses that sought to embed the use of GenAI in the assessment. A mixed-methods approach was adopted, which involved data collected from a survey (N = 85), student interviews (N = 11), course materials and a declaration of GenAI use in students’ submitted assignments (N = 158). Our findings indicate that approximately two-thirds of the students decided not to adopt GenAI when allowed, and that the assessment design, the perceived value of the assessment, students’ self-confidence and concerns about being wrongly accused of plagiarism were the most frequently cited reasons. This study shows the importance of consistent assessment policies and effective communication. Moreover, it is important for instructors to have a programme-level view when designing GenAI-related assessment policies. Implications for practice or policy: Effective communication with students about what GenAI usage is or is not allowed in assessment is critical to avoid misunderstanding. Personalising and/or contextualising assessments helps reduce students’ reliance on GenAI. Course leaders should consider the overall policy and context for using GenAI in assessment beyond their own courses.

  • Research Article
  • Cited by 7
  • 10.3390/jcm14020571
Evaluation of Advanced Artificial Intelligence Algorithms' Diagnostic Efficacy in Acute Ischemic Stroke: A Comparative Analysis of ChatGPT-4o and Claude 3.5 Sonnet Models.
  • Jan 17, 2025
  • Journal of clinical medicine
  • Mustafa Koyun + 1 more

Background/Objectives: Acute ischemic stroke (AIS) is a leading cause of mortality and disability worldwide, with early and accurate diagnosis being critical for timely intervention and improved patient outcomes. This retrospective study aimed to assess the diagnostic performance of two advanced artificial intelligence (AI) models, Chat Generative Pre-trained Transformer (ChatGPT-4o) and Claude 3.5 Sonnet, in identifying AIS from diffusion-weighted imaging (DWI). Methods: The DWI images of a total of 110 cases (AIS group: n = 55, healthy controls: n = 55) were provided to the AI models via standardized prompts. The models' responses were compared to radiologists' gold-standard evaluations, and performance metrics such as sensitivity, specificity, and diagnostic accuracy were calculated. Results: Both models exhibited a high sensitivity for AIS detection (ChatGPT-4o: 100%, Claude 3.5 Sonnet: 94.5%). However, ChatGPT-4o demonstrated a significantly lower specificity (3.6%) compared to Claude 3.5 Sonnet (74.5%). The agreement with radiologists was poor for ChatGPT-4o (κ = 0.036; 95% CI: -0.013, 0.085) but good for Claude 3.5 Sonnet (κ = 0.691; 95% CI: 0.558, 0.824). In terms of the AIS hemispheric localization accuracy, Claude 3.5 Sonnet (67.2%) outperformed ChatGPT-4o (32.7%). Similarly, for specific AIS localization, Claude 3.5 Sonnet (30.9%) showed greater accuracy than ChatGPT-4o (7.3%), with these differences being statistically significant (p < 0.05). Conclusions: This study highlights the superior diagnostic performance of Claude 3.5 Sonnet compared to ChatGPT-4o in identifying AIS from DWI. Despite its advantages, both models demonstrated notable limitations in accuracy, emphasizing the need for further development before achieving full clinical applicability. These findings underline the potential of AI tools in radiological diagnostics while acknowledging their current limitations.
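The reported metrics follow directly from a 2x2 confusion matrix. As a sketch, the counts below are reconstructed under the assumption of 55 AIS cases and 55 controls with the stated ChatGPT-4o sensitivity (100%) and specificity (3.6%), which implies 55 true positives, 53 false positives, and 2 true negatives:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 confusion matrix."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    po = (tp + tn) / n  # observed agreement with the gold standard
    # expected agreement by chance, from the row/column marginals
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, kappa

# Counts reconstructed from the reported ChatGPT-4o figures
sens, spec, kappa = diagnostic_metrics(tp=55, fn=0, fp=53, tn=2)
# sens = 1.0, spec ≈ 0.036, kappa ≈ 0.036 — matching the abstract
```

The near-zero kappa despite perfect sensitivity illustrates the abstract's point: a model that labels almost everything positive agrees with radiologists barely above chance.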

  • Research Article
  • Cited by 35
  • 10.1007/s10956-024-10104-0
Pixels and Pedagogy: Examining Science Education Imagery by Generative Artificial Intelligence
  • Mar 15, 2024
  • Journal of Science Education and Technology
  • Grant Cooper + 1 more

The proliferation of generative artificial intelligence (GenAI) means we are witnessing transformative change in education. While GenAI offers exciting possibilities for personalised learning and innovative teaching methodologies, its potential for reinforcing biases and perpetuating stereotypes poses ethical and pedagogical concerns. This article aims to critically examine the images produced by the integration of DALL-E 3 and ChatGPT, focusing on representations of science classrooms and educators. Applying a capital lens, we analyse how these images portray forms of culture (embodied, objectified and institutionalised) and explore whether these depictions align with, or contest, stereotypical representations of science education. The science classroom imagery showcased a variety of settings, from what the GenAI described as vintage to contemporary. Our findings reveal the presence of stereotypical elements associated with science educators, including white lab coats, goggles and beakers. While the images often align with stereotypical views, they also introduce elements of diversity. This article highlights the importance of ongoing vigilance regarding issues of equity, representation, bias and transparency in GenAI artefacts. This study contributes to broader discourses about the impact of GenAI in reinforcing or dismantling stereotypes associated with science education.

  • Research Article
  • Cited by 163
  • 10.3390/technologies11020044
How to Bell the Cat? A Theoretical Review of Generative Artificial Intelligence towards Digital Disruption in All Walks of Life
  • Mar 17, 2023
  • Technologies
  • Subhra Mondal + 2 more

Generative Artificial Intelligence (GAI) has brought revolutionary changes to the world, enabling businesses to create new experiences by combining virtual and physical worlds. As the use of GAI grows along with the Metaverse, it is explored by academics, researchers, and industry communities for its endless possibilities. From ChatGPT by OpenAI to Bard AI by Google, GAI is a leading technology in physical and virtual business platforms. This paper focuses on GAI’s economic and societal impact and the challenges it poses. Businesses must rethink their operations and strategies to create hybrid physical and virtual experiences using GAI. This study proposes a framework that can help business managers develop effective strategies to enhance their operations. It analyzes the initial applications of GAI in multiple sectors to promote the development of future customer solutions and explores how GAI can help businesses create new value propositions and experiences for their customers, and the possibilities of digital communication and information technology. A research agenda is proposed for developing GAI for business management to enhance organizational efficiency. The results highlight a healthy conversation on the potential of GAI in various business sectors to improve customer experience.

  • Research Article
  • Cited by 6
  • 10.1007/s12178-025-09961-y
A Current Review of Generative AI in Medicine: Core Concepts, Applications, and Current Limitations.
  • Apr 30, 2025
  • Current reviews in musculoskeletal medicine
  • Pouria Rouzrokh + 6 more

This review aims to offer a foundational overview of Generative Artificial Intelligence (AI) for healthcare professionals without an engineering background. It seeks to aid their understanding of Generative AI's current capabilities, applications, and limitations within the medical field. Generative AI models, distinct from discriminative models, are designed to create novel synthetic data. Key model families discussed include diffusion models for generating images and videos, Large Language Models (LLMs) for text, and Large Multimodal Models (LMMs) capable of processing multiple data types. Recent applications in healthcare are diverse, encompassing general uses like generating synthetic medical images, automating clinical documentation, and creating synthetic audio/video for training. More specialized applications include leveraging Generative AI models as backbones for diagnostic aids, enhancing information retrieval through Retrieval-Augmented Generation (RAG) pipelines, and coordinating multiple AI agents in complex workflows. Generative AI holds significant transformative potential in medicine, enhancing capabilities across imaging, documentation, education, and decision support. However, its integration faces substantial challenges, including models' knowledge limitations, the risk of generating incorrect or uncertain "hallucinated" outputs, inherent biases from training data, difficulty in interpreting model reasoning ("black box" nature), and navigating complex regulatory and ethical issues. This review offers a balanced perspective, acknowledging both the promise and the hurdles. While Generative AI is unlikely to fully replace physicians, understanding and leveraging these technologies will be crucial for medical professionals navigating the evolving healthcare landscape.
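The Retrieval-Augmented Generation (RAG) pipelines mentioned above can be sketched in miniature: retrieve the passages most relevant to a query, then assemble them into a grounded prompt. This toy example substitutes bag-of-words cosine similarity for the dense embeddings real systems use and stops at prompt assembly rather than calling a model; the corpus and all names are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real pipelines use dense vector models."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Rank corpus passages by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Ground the model's answer in the retrieved passages."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "DWI is sensitive to acute ischemic stroke.",
]
prompt = build_prompt("What is first-line for type 2 diabetes?", corpus)
```

Grounding generation in retrieved sources is one of the mitigations the review discusses for hallucinated outputs, since the model is steered toward citing supplied text rather than its parametric knowledge.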

  • Book Chapter
  • Cited by 4
  • 10.70593/978-81-981367-8-7_1
Applications of ChatGPT and generative artificial intelligence in transforming the future of various business sectors
  • Oct 28, 2024
  • Dimple Patil + 2 more

ChatGPT and generative artificial intelligence have transformed industry processes, decision-making, and customer engagement across multiple industries. As this chapter shows, ChatGPT and generative AI applications are transforming healthcare, finance, marketing, education, and customer service. ChatGPT uses artificial intelligence (AI) and machine learning (ML) models for real-time data analysis, personalized interactions, and automation, improving operational efficiency and user experiences. AI improves fraud detection and financial forecasting in finance, and diagnostic support and patient communication in healthcare. Generative AI enables hyper-personalized campaigns and content creation at scale in marketing, and personalized tutoring and content adaptation in education. Automated, contextually responsive chatbots built on generative AI models improve customer satisfaction and lower operational costs. As these technologies become essential to business, ethical issues like data privacy, bias mitigation, and AI transparency remain. This chapter emphasizes the need for strategic AI integration, suggesting that businesses that invest in responsible and ethical AI usage are better positioned to leverage generative AI's transformative potential, ensuring sustainable growth and competitive advantage in the changing digital landscape.

  • Research Article
  • 10.56557/jgembr/2025/v17i39877
AI-Augmented Agility: A Comprehensive Review of Generative AI Applications in Agile Project Management
  • Oct 24, 2025
  • Journal of Global Economics, Management and Business Research
  • Amienye Babatunde Omo Enabulele + 3 more

This article presents a narrative literature review of the emerging intersection between Generative Artificial Intelligence (GenAI) and Agile Project Management (APM). Using purposive, iterative searches across academic and practitioner sources, we screen for relevance to GenAI applications along the Agile lifecycle (planning, backlog refinement, estimation, development, testing, and retrospectives) and synthesize findings through a concept-centric, thematic analysis. The paper makes three contributions: (1) an integrative GenAI–APM alignment framework that maps core GenAI capabilities (e.g., requirements elaboration, code and test generation, risk sensing, knowledge summarization) to Agile roles, ceremonies, and artifacts; (2) an evidence-weighted assessment of opportunities (speed, decision support, collaboration) and risks (bias, privacy, model drift, over-reliance), with associated governance controls; and (3) a research agenda with testable propositions on effectiveness, human–AI teaming, measurement, compliance, and adoption barriers. Scholarly implications include clearer constructs and operational definitions to support cumulative empirical work. Practical implications include actionable guidance for PMOs and Scrum teams on where to pilot GenAI, how to measure value, and how to implement safeguards (data governance, responsible-AI checklists, and role/skill adjustments). By clarifying method, contribution, and significance, the review consolidates a fragmented discourse and offers a roadmap for rigorous research and responsible deployment of GenAI in Agile settings.

  • Research Article
  • Cited by 24
  • 10.1093/polsoc/puaf001
Governance of Generative AI
  • Jan 4, 2025
  • Policy and Society
  • Araz Taeihagh

The rapid and widespread diffusion of generative artificial intelligence (AI) has unlocked new capabilities and changed how content and services are created, shared, and consumed. This special issue builds on the 2021 Policy and Society special issue on the governance of AI by focusing on the legal, organizational, political, regulatory, and social challenges of governing generative AI. This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation. The article then highlights a comprehensive framework to govern generative AI, emphasizing the need for adaptive, participatory, and proactive approaches. The articles in this special issue stress the urgency of developing innovative and inclusive approaches to ensure that generative AI development is aligned with societal values. They explore the need for adaptation of data governance and intellectual property laws, propose a complexity-based approach for responsible governance, analyze how the dominance of Big Tech is exacerbated by generative AI developments and how this affects policy processes, highlight the shortcomings of technocratic governance and the need for broader stakeholder participation, propose new regulatory frameworks informed by AI safety research and learning from other industries, and highlight the societal impacts of generative AI.

  • Research Article
  • Cited by 17
  • 10.1016/j.jval.2024.10.3846
Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations - an ISPOR Working Group Report
  • Feb 1, 2025
  • Value in Health
  • Rachael L Fleurence + 7 more

  • Supplementary Content
  • 10.2196/71125
Applications, Challenges, and Prospects of Generative Artificial Intelligence Empowering Medical Education: Scoping Review
  • Oct 23, 2025
  • JMIR Medical Education
  • Yuhang Lin + 8 more

Background: Generative artificial intelligence (GAI) now drives medical education toward enhanced intelligence, personalization, and interactivity. With its vast generative abilities and diverse applications, GAI redefines how educational resources are accessed, teaching methods are implemented, and assessments are conducted. Objective: This study aimed to review the current applications of GAI in medical education; analyze its opportunities and challenges; identify its strengths and potential issues in educational methods, assessments, and resources; and capture GAI's rapid evolution and multidimensional applications in medical education, thereby providing a theoretical foundation for future practice. Methods: This scoping review used PubMed, Web of Science, and Scopus to analyze literature from January 2023 to October 2024, focusing on GAI applications in medical education. Following PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines, 5991 articles were retrieved, with 1304 duplicates removed. The 2-stage screening (title or abstract and full-text review) excluded 4564 articles, and a supplementary search added 8 articles, yielding 131 studies for final synthesis. We included (1) studies addressing GAI's applications, challenges, or future directions in medical education; (2) empirical research, systematic reviews, and meta-analyses; and (3) English-language articles. We excluded commentaries, editorials, viewpoints, perspectives, short reports, or communications with low levels of evidence, non-GAI technologies, and studies centered on other fields of medical education (eg, nursing). We integrated quantitative analysis of publication trends and Human Development Index (HDI) with thematic analysis of applications, technical limitations, and ethical implications. Results: Analysis of 131 articles revealed that 74.0% (n=97) originated from countries or regions with very high HDI, with the United States contributing the most (n=33); 14.5% (n=19) were from high HDI countries, 5.3% (n=7) from medium HDI countries, and 2.2% (n=3) from low HDI countries, with 3.8% (n=5) involving cross-HDI collaborations. ChatGPT was the most studied GAI model (n=119), followed by Gemini (n=22), Copilot (n=11), Claude (n=6), and LLaMA (n=4). Thematic analysis indicated that GAI applications in medical education mainly embody the diversification of educational methods, scientific evaluation of educational assessments, and dynamic optimization of educational resources. However, it also highlighted current limitations and potential future challenges, including insufficient scene adaptability, data quality and information bias, overreliance, and ethical controversies. Conclusion: GAI application in medical education exhibits significant regional disparities in development, and model research statistics reflect researchers' usage preferences. GAI holds potential for empowering medical education, but widespread adoption requires overcoming complex technical and ethical challenges. Grounded in symbiotic agency theory, we advocate establishing a resource-method-assessment tripartite model, developing specialized models and constructing an integrated system of general large language models incorporating specialized ones, promoting resource sharing, refining ethical governance, and building an educational ecosystem that fosters human-machine symbiosis, enabling deep tech-humanism integration and advancing medical education toward greater efficiency and human-centeredness.
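The screening counts this review reports can be checked arithmetically; the small sketch below simply re-derives the stated totals from the figures in the abstract:

```python
# Screening-flow arithmetic as reported in the review's Methods
retrieved = 5991                 # records from PubMed, Web of Science, Scopus
duplicates = 1304                # duplicates removed
excluded_in_screening = 4564     # excluded in 2-stage screening
supplementary = 8                # added by supplementary search

screened = retrieved - duplicates                    # 4687 records screened
after_screening = screened - excluded_in_screening   # 123 records retained
final = after_screening + supplementary              # 131 studies synthesized
```

The derived total (131) matches the number of studies the review analyzes, confirming the reported flow is internally consistent.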

  • Research Article
  • Cited by 103
  • 10.1021/acs.jchemed.3c00063
Was This Title Generated by ChatGPT? Considerations for Artificial Intelligence Text-Generation Software Programs for Chemists and Chemistry Educators
  • Mar 20, 2023
  • Journal of Chemical Education
  • Mary E Emenike + 1 more

Generative artificial intelligence (GAI) is here; now what? In this commentary, we discuss the potential impacts of GAI text-based systems for the chemistry community. The recent launch of ChatGPT, a free GAI text-based system by OpenAI, has sparked concerns regarding academic integrity and student assessment across all educational levels. However, the capabilities of these systems will impact more than the teaching and learning of chemistry; GAI systems can serve students, faculty, and administrators for teaching and learning, research, and professional activities. Herein we explore various ways students and faculty might use GAI systems, identify potential benefits and risks, and consider equity and accessibility issues. We hope to inspire productive discussions on leveraging GAI technology’s capabilities while recognizing its limitations.

  • Research Article
  • 10.1111/faf.70037
A Prospectus on Generative Artificial Intelligence in Marine Ecosystem Modelling
  • Jan 19, 2026
  • Fish and Fisheries
  • Scott Spillias

Marine ecosystem modelling faces increasing demands for rapid development and deployment to address urgent environmental challenges, yet technical complexity and time‐intensive processes often constrain timely insights for management decisions. This prospectus synthesises current applications and outlines future research directions for integrating Generative Artificial Intelligence (GenAI) into marine ecosystem modelling while maintaining scientific rigour. I present a structured framework for integrating GenAI across eight interconnected components of the modelling cycle: model scoping, data gathering, conceptual framework development, model development, model execution, validation and calibration, reporting and stakeholder engagement. Through analysis of current applications and emerging research, I demonstrate how GenAI can automate routine tasks, democratise access to sophisticated modelling approaches, and improve model quality. Achieving success will require overcoming persistent challenges, including data limitations, institutional barriers and ethical concerns. I propose a research agenda addressing three streams: capability assessment to systematically evaluate GenAI's potential in marine ecosystem modelling; avenues for ensuring scientific integrity and reliability; and socio‐technical integration to address ethical and institutional challenges. While GenAI offers the potential to enhance modelling, a human‐centered approach is essential, where GenAI augments, rather than replaces, human expertise in model validation, interpretation of results and ensuring sustainable management outcomes. To support readers new to this space, a primer in the supporting information outlines practical considerations for accessing GenAI tools, from cloud‐based services to locally‐run models and their implications for privacy, reproducibility and computational requirements.
