Enhancing regulatory affairs in the market placing of new medical devices: how LLMs like ChatGPT may support and simplify processes
Placing a medical device on the market in compliance with the requirements of EU Regulation 2017/745 (Medical Device Regulation) demands advanced regulatory expertise and a high level of detail and depth, inevitably requiring significant human and time resources. In an era where Artificial Intelligence (AI) is already present in many aspects of daily life, the potential of AI tools in the scientific field, such as in the CE marking process of medical devices, is being explored. This process consists of several phases and related activities, some of which were chosen as significant examples to evaluate how, and to what extent, AI can add value in achieving compliance. The article presents the overall results, in terms of performance and reliability, of generative AI tests using Large Language Models such as ChatGPT, applied to some of the processes necessary for placing a medical device on the market. The method focuses on the relationship between prompt quality and output quality, demonstrating the importance of prompt engineering in using these tools effectively alongside regulatory processes. It also emphasizes that end users need education, training, and an understanding of the mechanisms of generative AI to optimize performance.
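The abstract's emphasis on the link between prompt quality and output quality can be illustrated with a minimal sketch of structured prompt construction. The fields (role, context, task, output format) and the example task are illustrative assumptions, not the prompt scheme used in the article:

```python
# Minimal sketch of structured prompt construction for a regulatory task.
# The field names and the example task are invented for illustration.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt: an explicit role, the regulatory
    context, a precise task, and the expected output format."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

# A vague prompt for contrast with the structured version below.
vague = "Write something about MDR classification."

structured = build_prompt(
    role="Regulatory affairs specialist for EU medical devices",
    context="Class IIa software device under EU Regulation 2017/745 (MDR)",
    task="List the applicable classification rules from Annex VIII and justify each.",
    output_format="Numbered list, one rule per item, with a one-sentence rationale.",
)
print(structured)
```

The point of the sketch is only that the structured variant pins down role, scope, and output shape, which is what gives an LLM less room to drift than the vague one-liner.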
- Research Article
31
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
- Research Article
8
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Research Article
1
- 10.1016/j.clon.2025.103798
- May 1, 2025
- Clinical oncology (Royal College of Radiologists (Great Britain))
Artificial Intelligence in Health Care: A Rallying Cry for Critical Clinical Research and Ethical Thinking.
- Research Article
16
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
Getting AI Right: Introductory Notes on AI & Society
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
1
- 10.1027/1015-5759/a000764
- Mar 1, 2023
- European Journal of Psychological Assessment
Measurement Does Not Take Place in a Legal Vacuum
- Research Article
2
- 10.1186/s40561-025-00406-0
- Aug 4, 2025
- Smart Learning Environments
As generative artificial intelligence (AI) tools and large language model (LLM)-powered applications develop rapidly in the era of algorithms, they should be integrated thoughtfully to enhance English as a Foreign Language (EFL) teaching and learning without replacing learners' critical thinking (CT). This study systematically analyzes the impact of generative AI tools and LLMs on language learners' CT in EFL education, using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to identify, evaluate, and synthesize relevant studies from 2022 to 2025. A thorough review of 15 studies selected from Web of Science (WoS), SCOPUS, ERIC, ProQuest, and Google Scholar focuses on the dual nature of generative AI tools and LLMs, research methods, main focuses, theories and models, limitations and challenges, and future directions in the field. The findings indicate that generative AI tools and LLMs possess both the potential to nurture and the risk of hindering CT in EFL education: 66.67% of studies reported a positive role for these tools in CT, while 33.33% reported a negative one. Furthermore, 3 types of research methods, 3 key themes of research focus, and 4 groups of theoretical perspectives were examined. However, 4 kinds of limitations remain in this field, concerning research scope, user dependency, generative AI reliability, and pedagogical integration. Future research can focus on assessing long-term effects, broadening research scope, promoting responsible AI use, and refining pedagogical strategies. Finally, the limitations, implications, and future directions of this study are discussed.
- Discussion
6
- 10.1016/j.ejmp.2021.05.008
- Mar 1, 2021
- Physica Medica
Focus issue: Artificial intelligence in medical physics.
- Research Article
11
- 10.1016/j.jclinepi.2025.111746
- May 1, 2025
- Journal of clinical epidemiology
Machine learning promises versatile help in the creation of systematic reviews (SRs). Recently, further developments in the form of large language models (LLMs) and their application in SR conduct attracted attention. We aimed at providing an overview of LLM applications in SR conduct in health research. We systematically searched MEDLINE, Web of Science, IEEEXplore, ACM Digital Library, Europe PMC (preprints), Google Scholar, and conducted an additional hand search (last search: February 26, 2024). We included scientific articles in English or German, published from April 2021 onwards, building upon the results of a mapping review that has not yet identified LLM applications to support SRs. Two reviewers independently screened studies for eligibility; after piloting, 1 reviewer extracted data, checked by another. Our database search yielded 8054 hits, and we identified 33 articles from our hand search. We finally included 37 articles on LLM support. LLM approaches covered 10 of 13 defined SR steps, most frequently literature search (n = 15, 41%), study selection (n = 14, 38%), and data extraction (n = 11, 30%). The mostly recurring LLM was Generative Pretrained Transformer (GPT) (n = 33, 89%). Validation studies were predominant (n = 21, 57%). In half of the studies, authors evaluated LLM use as promising (n = 20, 54%), one-quarter as neutral (n = 9, 24%) and one-fifth as nonpromising (n = 8, 22%). Although LLMs show promise in supporting SR creation, fully established or validated applications are often lacking. The rapid increase in research on LLMs for evidence synthesis production highlights their growing relevance. Systematic reviews are a crucial tool in health research where experts carefully collect and analyze all available evidence on a specific research question. 
Creating these reviews is typically time- and resource-intensive, often taking months or even years to complete, as researchers must thoroughly search, evaluate, and synthesize an immense number of scientific studies. For the present article, we conducted a review to understand how new artificial intelligence (AI) tools, specifically large language models (LLMs) like Generative Pretrained Transformer (GPT), can be used to help create systematic reviews in health research. We searched multiple scientific databases and finally found 37 relevant articles. We found that LLMs have been tested to help with various parts of the systematic review process, particularly in 3 main areas: searching scientific literature (41% of studies), selecting relevant studies (38%), and extracting important information from these studies (30%). GPT was the most commonly used LLM, appearing in 89% of the studies. Most of the research (57%) focused on testing whether these AI tools actually work as intended in this context of systematic review production. The results were mixed: about half of the studies found LLMs promising, a quarter were neutral, and one-fifth found them not promising. While LLMs show potential for making the systematic review process more efficient, there is still a lack of fully tested and validated applications. However, the increasing number of studies in this field suggests that these AI tools are becoming increasingly important in creating systematic reviews.
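The study-selection step that this review found among the most frequently supported can be sketched with a toy screener. A keyword rule stands in for the LLM judgment here, and the inclusion terms and records are invented examples, not material from the review:

```python
# Toy sketch of LLM-assisted title/abstract screening for a systematic review.
# In practice the `screen` call would send the record plus the inclusion
# criteria to an LLM; a keyword rule stands in for that judgment here.

INCLUDE_TERMS = {"large language model", "llm", "gpt"}

def screen(title_abstract: str) -> str:
    """Return 'include' if any inclusion term appears in the record,
    else 'exclude'."""
    text = title_abstract.lower()
    return "include" if any(term in text for term in INCLUDE_TERMS) else "exclude"

# Invented example records.
records = [
    "Using GPT-4 for title and abstract screening in systematic reviews",
    "A survey of nursing workload in intensive care units",
    "LLM-based data extraction from randomized trial reports",
]
decisions = {r: screen(r) for r in records}
included = [r for r, d in decisions.items() if d == "include"]
print(f"{len(included)}/{len(records)} records included")  # prints "2/3 records included"
```

Even with a real LLM in place of the rule, the surrounding workflow is the same: each record gets an include/exclude decision that a human reviewer then verifies, which is why the review stresses validation studies.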
- Research Article
- 10.69554/fmai7138
- Mar 1, 2025
- Advances in Online Education: A Peer-Reviewed Journal
Over the past three decades, the evolution of technology has dramatically reshaped the information landscape, making it easier to access and simultaneously easier to distort. The advent of artificial intelligence (AI), particularly generative tools like ChatGPT and CoPilot, has further complicated the pursuit of information literacy, posing significant challenges for educators, librarians and students alike. This paper explores the implications of integrating generative AI (GenAI) tools into educational and professional settings, emphasising the necessity of critical thinking and the development of robust information literacy skills to discern the credibility and authority of AI-generated content. By examining the Association of College and Research Libraries’ (ACRL) ‘Framework for Information Literacy for Higher Education’, this paper provides strategies to identify risk areas related to AI integration as well as produce use cases for large language model (LLM) GenAI tools, including a flowchart for determining when to make use of GenAI, a toolkit for positive/effective use cases, and a rubric for assessing information literacy and critical thinking. While AI tools can offer valuable educational opportunities, their propensity to generate misleading or inaccurate information necessitates a careful and informed approach to their use. This paper concludes with a call for ongoing vigilance in maintaining academic integrity and underscores the importance of continuously questioning the reliability of AI outputs in educational contexts.
- Research Article
- 10.1371/journal.pone.0336154
- Dec 4, 2025
- PLOS One
Background: In medical education, mentoring and feedback play crucial roles. Feedback on exam performance is a vital component, as it allows students to improve, but it has to be tailor-made and specific to the individual student, which requires a great deal of time and human resources that are not always in abundance. The use of artificial intelligence (AI) is a promising proposition, yet it comes with the inherent problem of large language models (LLMs) generating inaccurate responses. To alleviate and minimize this, we developed our own model, 'Sisu Athwala', using retrieval-augmented generation (RAG) with custom LLMs. Objective: To design and implement an AI-based tool using RAG that provides customized feedback to medical students to enhance their exam performance while minimizing the risk of inaccurate LLM responses, and to have the tool evaluated by expert student mentors and by end users. Methods: The study was conducted at the Faculty of Medicine, University of Peradeniya, Sri Lanka. An AI-based feedback tool was developed, powered by the Generative Pre-trained Transformer 4 (GPT-4) LLM through a RAG pipeline. Expert instruction sets were used to build the database through an embedding model to minimize potential inaccuracies and biases. To generate user queries, students completed a self-evaluation form that was processed using Representative Vector Summarization (RVS), so that the most critical concerns of each student were distilled and captured accurately, minimizing noise and irrelevant details. The role of the AI tool was defined as that of a counsellor during pre-processional alignment, ensuring a professional manner throughout the interaction. User queries were processed through the OpenAI Application Programming Interface (API) using the GPT-4-turbo LLM. Students were invited to engage in conversations with the newly developed feedback tool.
The AI tool was evaluated by expert student mentors on its ability to give personalized feedback, use varied language expressions, and introduce novel perspectives to students. End-user perception of the tool was assessed using a questionnaire. Results: The post-implementation end-user survey of the Sisu Athwala AI tool was largely positive: 92% found the advice on stress management helpful, 60% believed the suggested study techniques were useful, a further 60% were comfortable using the tool, and 52% found the advice on exam performance helpful. In their open comments, some suggested offering the tool as a mobile app. Fifteen expert student mentors took part in evaluating the tool: 100% agreed that it effectively addressed key points of student strengths and identified areas for improvement according to the Pendleton model, and 90% agreed that Sisu Athwala gives clear, actionable plans. Conclusion: The Sisu Athwala AI tool provided comprehensive, tailor-made feedback and guidance to medical students and was well received by end users; the expert student mentors' evaluation of the material generated by the tool was also quite positive. Though not a replacement for human mentors, it supports the delivery of mentoring despite human resource constraints.
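The retrieval step of a RAG pipeline like the one described can be sketched in miniature. The study uses an embedding model with GPT-4; in this sketch, simple word overlap stands in for vector similarity, and the corpus and query are invented examples:

```python
# Toy sketch of the retrieval step in a RAG pipeline. Word overlap stands
# in for cosine similarity between embeddings; documents are invented.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (a crude
    stand-in for embedding similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k documents with the highest overlap scores."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

corpus = [
    "Spaced repetition improves long-term retention of anatomy facts.",
    "Exam stress can be managed with short breathing exercises.",
    "Group study sessions help clarify difficult pharmacology topics.",
]
query = "How can I manage stress before the exam?"
context = retrieve(query, corpus, k=1)

# The retrieved context is prepended to the user query, so the LLM answers
# from curated material instead of its parametric memory alone.
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```

Grounding the generation step in retrieved, expert-curated text is what lets a pipeline like this reduce (though not eliminate) inaccurate responses, which is the design motivation the abstract describes.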
- Front Matter
- 10.1162/artl_e_00409
- May 1, 2023
- Artificial life
Accessible generative artificial intelligence (AI) tools like large language models (LLMs) (e.g., ChatGPT, Minerva) are raising a flurry of questions about the potential and implications of generative algorithms and the ethical use of AI-generated text in a variety of contexts, including open science (Bugbee & Ramachandran, 2023), student assessment (Heidt, 2023), and medicine (Harrer, 2023). Similarly, among the graphic and visual arts communities, the use of generative image synthesis algorithms (e.g., DALL-E, Midjourney, Stable Diffusion) that take text prompts as input and produce works in the style of a particular human artist, or of no artist who ever lived, is causing consternation and posing challenging questions (Murphy, 2022; Plunkett, 2022). The use of generative AI to create deep fakes has also been in the spotlight (Ruiter, 2021), as has its role in answering scientific research questions directly (Castelvecchi, 2023). To our minds, the questions these technologies raise do not seem to be of a fundamentally different character from questions asked about AI for many years. They largely concern (a) what is possible, (b) what is right, and (c) the implications of the technology's use. For instance,
- Research Article
3
- 10.1111/nyas.15258
- Nov 25, 2024
- Annals of the New York Academy of Sciences
Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility: specifically, the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced without AI assistance by the same human? We conducted a preregistered experiment with representative sampling (N = 1802) repeated in four countries (United States, United Kingdom, China, and Singapore). We investigated laypeople's attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, or no AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.
- Research Article
- 10.3390/encyclopedia5040180
- Oct 28, 2025
- Encyclopedia
Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), is rapidly reshaping higher education by transforming teaching, learning, assessment, research, and institutional management. This entry provides a state-of-the-art, comprehensive, evidence-based synthesis of established AI applications and their implications within the higher education landscape, emphasizing mature knowledge aimed at educators, researchers, and policymakers. AI technologies now support personalized learning pathways, enhance instructional efficiency, and improve academic productivity by facilitating tasks such as automated grading, adaptive feedback, and academic writing assistance. The widespread adoption of AI tools among students and faculty members has created a critical need for AI literacy—encompassing not only technical proficiency but also critical evaluation, ethical awareness, and metacognitive engagement with AI-generated content. Key opportunities include the deployment of adaptive tutoring and real-time feedback mechanisms that tailor instruction to individual learning trajectories; automated content generation, grading assistance, and administrative workflow optimization that reduce faculty workload; and AI-driven analytics that inform curriculum design and early intervention to improve student outcomes. At the same time, AI poses challenges related to academic integrity (e.g., plagiarism and misuse of generative content), algorithmic bias and data privacy, digital divides that exacerbate inequities, and risks of “cognitive debt” whereby over-reliance on AI tools may degrade working memory, creativity, and executive function. The lack of standardized AI policies and fragmented institutional governance highlight the urgent necessity for transparent frameworks that balance technological adoption with academic values. 
Anchored in several foundational pillars (such as a brief description of AI higher education, AI literacy, AI tools for educators and teaching staff, ethical use of AI, and institutional integration of AI in higher education), this entry emphasizes that AI is neither a panacea nor an intrinsic threat but a “technology of selection” whose impact depends on the deliberate choices of educators, institutions, and learners. When embraced with ethical discernment and educational accountability, AI holds the potential to foster a more inclusive, efficient, and democratic future for higher education; however, its success depends on purposeful integration, balancing innovation with academic values such as integrity, creativity, and inclusivity.
- Research Article
- 10.34190/ecie.19.1.2468
- Sep 20, 2024
- European Conference on Innovation and Entrepreneurship
Marketing scientists as well as practitioners believe that artificial intelligence (AI) holds the promise of productivity gains for organizations. However, there has been little scientific research into these theories. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools, and this effect is even stronger when knowledge workers possess high levels of AI skills and use AI tools with suitable capabilities. Of the six marketing disciplines analyzed closely, SEO/content and design in particular demonstrated significant productivity gains, both with generative AI (GAI) tools the team already subscribed to, such as ChatGPT 4 and Canva, and with new AI solutions. Since an AI tool's level of integration showed only a weak positive productivity impact, future studies should further investigate this variable by comparing less advanced but more accessible tools like generative AI with highly advanced but less accessible business AI. Having navigated the vast and dynamic landscape of AI tools, the study further emphasizes the importance of AI experience sharing and informed decision-making, which implies knowing one's own user rights and staying up to date on AI advancements. Zooming out from the process level, the literature review further highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities.
Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.