Optimized interaction with Large Language Models: A practical guide to Prompt Engineering and Retrieval-Augmented Generation
Given the increasing number of radiological examinations, large language models (LLMs) offer promising support in radiology. Optimized interaction is essential to ensure reliable results. This article provides an overview of interaction techniques such as prompt engineering, zero-shot learning, and retrieval-augmented generation (RAG) and gives practical tips for their application in radiology. Interaction techniques are demonstrated using practical examples, with concrete recommendations for their application in routine radiological practice. Advanced interaction techniques allow task-specific adaptation of LLMs without the need for retraining. The creation of precise prompts and the use of zero-shot and few-shot learning can significantly improve response quality. RAG enables the integration of current and domain-specific information into LLM tools, increasing the accuracy and relevance of the generated content. The use of prompt engineering, zero-shot and few-shot learning, and RAG can optimize interaction with LLMs in radiology. Through these targeted strategies, radiologists can efficiently integrate general chatbots into routine practice to improve patient care.
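The few-shot strategy summarized above can be sketched as a simple prompt-assembly routine. The task wording, reports, and labels below are invented placeholders for illustration, not examples from the article:

```python
def build_few_shot_prompt(instruction, examples, new_report):
    """Assemble a few-shot prompt: a task instruction, a few worked
    examples, then the new case for the model to complete."""
    parts = [instruction, ""]
    for report, label in examples:
        parts.append(f"Report: {report}\nAssessment: {label}\n")
    parts.append(f"Report: {new_report}\nAssessment:")
    return "\n".join(parts)

examples = [
    ("No focal consolidation. Normal heart size.", "No acute findings"),
    ("Right lower lobe opacity with air bronchograms.", "Suspicious for pneumonia"),
]
prompt = build_few_shot_prompt(
    "Classify each chest X-ray report as 'No acute findings' or "
    "'Suspicious for pneumonia'.",
    examples,
    "Patchy left basal opacity, possible infiltrate.",
)
```

The resulting string is sent to the chatbot as-is; the worked examples steer the model toward the desired label vocabulary without any retraining.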
43
- Mar 1, 1984
- Dimensions in health service
144
- 10.1148/radiol.2522081895
- Jun 9, 2009
- Radiology
58
- 10.1148/radiol.232561
- Nov 1, 2023
- Radiology
99
- 10.1148/radiol.230970
- Jul 1, 2023
- Radiology
2
- 10.48550/arxiv.2410.19385
- Oct 25, 2024
32
- 10.1038/s41598-023-41512-8
- Aug 30, 2023
- Scientific Reports
702
- 10.48550/arxiv.2005.14165
- May 28, 2020
25
- 10.1007/s00402-023-05113-4
- Nov 11, 2023
- Archives of Orthopaedic and Trauma Surgery
317
- 10.1148/radiol.230582
- May 16, 2023
- Radiology
233
- 10.1186/s42492-023-00136-5
- May 18, 2023
- Visual Computing for Industry, Biomedicine, and Art
- Research Article
- 10.1007/s41666-025-00190-z
- Feb 20, 2025
- Journal of Healthcare Informatics Research
Information extraction (IE) of unstructured electronic health records is challenging due to the semantic complexity of textual data. Generative large language models (LLMs) offer promising solutions to address this challenge. However, identifying the best training methods to adapt LLMs for IE in residential aged care settings remains underexplored. This research addresses this challenge by evaluating the effects of zero-shot and few-shot learning, both with and without parameter-efficient fine-tuning (PEFT) and retrieval-augmented generation (RAG), using Llama 3.1-8B. The study applied named entity recognition (NER) to nursing notes from Australian residential aged care facilities (RACFs), focusing on agitation in dementia and malnutrition risk factors. Performance evaluation includes accuracy, macro-averaged precision, recall, and F1 score. We used non-parametric statistical methods to test whether the differences were statistically significant. Results show that zero-shot and few-shot learning, whether combined with PEFT or RAG, achieve comparable performance across the clinical domains when the same prompting template is used. Few-shot learning significantly outperforms zero-shot learning when neither PEFT nor RAG is applied. Notably, PEFT significantly improves model performance in both zero-shot and few-shot learning; however, RAG significantly improves performance only in few-shot learning. After PEFT, the performance of zero-shot learning reaches a comparable level with few-shot learning. However, few-shot learning with RAG significantly outperforms zero-shot learning with RAG. We also found a similar level of performance between few-shot learning with RAG and zero-shot learning with PEFT. These findings provide valuable insights for researchers, practitioners, and stakeholders to optimize the use of generative LLMs in clinical IE.
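A zero-shot prompt for the NER task described above might be assembled as follows. The instruction wording, output schema, and note text are hypothetical illustrations, not the study's actual prompting template:

```python
def zero_shot_ner_prompt(note, entity_types):
    """Zero-shot NER prompt: no worked examples, only an instruction,
    the target entity types, and the requested output format."""
    types = ", ".join(entity_types)
    return (
        f"Extract every mention of the following entity types from the "
        f"nursing note: {types}.\n"
        'Return a JSON list of {"type": ..., "text": ...} objects, '
        "or [] if none are present.\n\n"
        f"Note: {note}"
    )

ner_prompt = zero_shot_ner_prompt(
    "Resident refused lunch and appeared agitated during personal care.",
    ["agitation behaviour", "malnutrition risk factor"],
)
```

A few-shot variant would simply prepend annotated note/JSON pairs before the final note, which is the only difference between the two learning settings compared in the paper.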
- Conference Article
3
- 10.2118/217671-ms
- Feb 27, 2024
Finding information across multiple databases, formats, and documents remains a manual job in the drilling industry. Large Language Models (LLMs) have proven effective in data-aggregation tasks, including answering questions. However, using LLMs for domain-specific factual responses poses a nontrivial challenge. The expert labor cost for training domain-specific LLMs prohibits niche industries from developing custom question-answering bots. This paper tests several commercial LLMs for information retrieval tasks for drilling data using zero-shot in-context learning. In addition, we studied the models' calibration using a few-shot multiple-choice drilling questionnaire. To create an LLM benchmark for drilling, we collated text data from publicly available databases: the Norwegian Petroleum Directorate (NPD), company annual reports, and a petroleum glossary. We used a zero-shot learning technique that relies on an LLM's ability to generate responses for tasks outside its training. We implemented a controlled zero-shot "in-context" learning procedure that sends a user's query augmented with text data to the LLM as input. This implementation encourages the LLM to take the answer from the data while leveraging its pre-trained contextual-learning capability. We evaluated several state-of-the-art generic LLMs available through an API, including G4, G3.5-TI, the J2-ultra model, and the L2 series. The paper documents the pre-trained LLMs' ability to provide correct answers and identify petroleum industry jargon from the collated dataset. Our zero-shot in-context learning implementation helps vanilla LLMs provide relevant factual responses for the drilling domain. While each LLM's performance varies, we have identified models suitable for a drilling chatbot application. In particular, G4 outperformed the other models on all tasks. This finding suggests that training expensive domain-specific LLMs is not necessary for question-answering tasks in the context of drilling data.
We demonstrate the utility of zero-shot in-context learning using pre-trained LLMs for question-answering tasks relevant to the drilling industry. Additionally, we prepared and publicly released the collated datasets from the NPD database and companies’ annual reports to enable results reproducibility and to foster acceleration of language model adoption and development for the subsurface and drilling industries. The petroleum industry may find our solution beneficial for enhancing personnel training and career development. It also offers a method for conducting data analytics and overcoming challenges in retrieving historical well data.
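The controlled zero-shot in-context procedure described above, sending a user's query augmented with source text to the LLM, can be sketched in a few lines. The instruction wording and the passage are invented placeholders:

```python
def in_context_query(passages, question):
    """Zero-shot in-context prompt: the user's query is augmented with
    source text, and the model is told to answer only from that context."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below. If the answer "
        "is not in the context, reply 'Not found in the provided data.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

qa_prompt = in_context_query(
    ["The example well was spudded in March and reached a total depth of 2,450 m."],
    "What was the total depth of the example well?",
)
```

Constraining the model to the supplied passages is what encourages factual answers from vanilla, pre-trained LLMs rather than from their (possibly stale) parametric knowledge.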
- Research Article
18
- 10.1055/a-2264-5631
- Feb 26, 2024
- RöFo: Fortschritte auf dem Gebiete der Röntgenstrahlen und der Nuklearmedizin
Large language models (LLMs) such as ChatGPT have shown significant potential in radiology. Their effectiveness often depends on prompt engineering, which optimizes the interaction with the chatbot for accurate results. Here, we highlight the critical role of prompt engineering in tailoring the LLMs' responses to specific medical tasks. Using a clinical case, we elucidate different prompting strategies to adapt the LLM ChatGPT using GPT-4 to new tasks without additional training of the base model. These approaches range from precision prompts to advanced in-context methods such as few-shot and zero-shot learning. Additionally, the significance of embeddings, which serve as a data representation technique, is discussed. Prompt engineering substantially improved and focused the chatbot's output. Moreover, embedding of specialized knowledge allows for more transparent insight into the model's decision-making and thus enhances trust. Despite certain challenges, prompt engineering plays a pivotal role in harnessing the potential of LLMs for specialized tasks in the medical domain, particularly radiology. As LLMs continue to evolve, techniques like few-shot learning, zero-shot learning, and embedding-based retrieval mechanisms will become indispensable in delivering tailored outputs. · Large language models might impact radiological practice and decision-making. · However, implementation and performance are dependent on the assigned task. · Optimization of prompting strategies can substantially improve model performance. · Strategies for prompt engineering range from precision prompts to zero-shot learning. · Russe MF, Reisert M, Bamberg F et al. Improving the use of LLMs in radiology through prompt engineering: from precision prompts to zero-shot learning. Fortschr Röntgenstr 2024; 196: 1166-1170.
- Research Article
- 10.2118/0125-0092-jpt
- Jan 1, 2025
- Journal of Petroleum Technology
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 217671, "Enhancing Information Retrieval in the Drilling Domain: Zero-Shot Learning With Large Language Models for Question Answering," by Felix J. Pacis, SPE, University of Stavanger, and Sergey Alyaev and Gilles Pelfrene, SPE, NORCE, et al. The paper has not been peer reviewed.

Finding information across multiple databases, formats, and documents remains a manual job in the drilling industry. Large language models (LLMs) have proven effective in data-aggregation tasks, including answering questions. However, using LLMs for domain-specific factual responses poses a nontrivial challenge. The expert-labor cost for training domain-specific LLMs prohibits niche industries from developing custom question-answering bots. The complete paper tests several commercial LLMs for information-retrieval tasks for drilling data using zero-shot in-context learning. In addition, the model's calibration is tested with a few-shot multiple-choice drilling questionnaire.

Introduction: While LLMs have proven effective in various tasks ranging from sentiment analysis to text completion, using LLMs for question-answering tasks presents a challenge in providing factual responses. Pretrained LLMs serve only as a parameterized implicit knowledge base and cannot access recent data; thus, their information is bounded by the time of training. Retrieval-augmented generation (RAG) can address some of these issues by extending the utility of LLMs to specific data sources. Fig. 1 shows a simplified RAG-based LLM question/answer application. RAG involves two primary components: document retrieval (green boxes), which retrieves the most relevant context based on the query, and LLM response generation (blue boxes). During response generation, the LLM operates on the prompt, query, and retrieved context without any change to the model parameters, a process the authors term "in-context learning."

Methodology: Two experiments were conducted: the first is a few-shot multiple-choice experiment evaluated using the SLB drilling glossary; the second is a zero-shot in-context experiment evaluated on drilling reports and company reports.

Multiple-Choice Experiment (SLB Drilling Glossary): A publicly available drilling glossary served as the basis for evaluation. A total of 409 term/definition pairs were considered. Five term/definition pairs were chosen as few-shot default values, while the remaining 404 pairs served as the multiple-choice questions. Four choices were given for each term/definition question, one of which was the correct answer. The three incorrect choices were picked randomly from all possible terms minus the true answer.

Zero-Shot In-Context Experiment (Norwegian Petroleum Directorate (NPD) Database): The authors explored the wellbore histories of all individual exploration wells drilled on the Norwegian shelf in the NPD database. In this experiment, 12 exploration wells were randomly chosen for evaluation. In addition to these drilling reports, information about the stratigraphy of three additional wells was added.

Annual Reports: Annual reports of two major operators in Norway for 2020 and 2021 were also considered. These consisted of short summaries presenting the main operational and economic results achieved by the company throughout the year. These reports were added to the evaluation to balance the higher technical content of the wellbore-history reports.
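The two RAG components described above, document retrieval followed by response generation, can be sketched in a few lines. The word-overlap retriever below is a toy stand-in for the embedding-based search used in practice, and the documents are invented placeholders:

```python
def retrieve(query, documents, k=2):
    """Toy document retrieval: rank documents by word overlap with the
    query (a stand-in for vector/embedding search)."""
    q = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )[:k]

def rag_prompt(query, documents):
    """Response-generation step: the retrieved context is prepended to the
    user query; model parameters are never changed (in-context learning)."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Drilling glossary: kick - an influx of formation fluid into the wellbore.",
    "Annual report: production rose in 2021.",
    "Stratigraphy notes for the example well.",
]
drilling_prompt = rag_prompt("What is a kick in drilling?", docs)
```

In a production system the `retrieve` step would query a vector index of chunked reports; only the prompt-assembly step would stay essentially as shown.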
- Research Article
- 10.1007/s42452-025-07225-5
- Aug 21, 2025
- Discover Applied Sciences
This comprehensive review of zero-shot and few-shot learning techniques in natural language processing (NLP) traces their evolution from traditional methods to cutting-edge approaches: transfer learning and pre-trained language models, semantic embedding, attribute-based approaches, and generative models for data augmentation in zero-shot learning; and meta-learning, model-agnostic meta-learning (MAML), relation networks, and prototypical networks in few-shot learning. Real-world applications underscore the adaptability and efficacy of these techniques across various NLP tasks in both industry and academia. Acknowledging challenges inherent in zero-shot and few-shot learning, this review identifies limitations and suggests avenues for improvement. It emphasizes theoretical foundations alongside practical considerations such as accuracy and generalization across diverse NLP tasks. By consolidating key insights, this review provides researchers and practitioners with valuable guidance on the current state and future potential of zero-shot and few-shot learning techniques in addressing real-world NLP challenges. Looking ahead, this review aims to stimulate further research, fostering a deeper understanding of the complexities and applicability of zero-shot and few-shot learning techniques in NLP. By offering a roadmap for future exploration, it seeks to contribute to the ongoing advancement and practical implementation of NLP technologies across various domains.
- Research Article
8
- 10.1016/j.cogsys.2023.101188
- Nov 30, 2023
- Cognitive Systems Research
Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
- Research Article
177
- 10.1109/tip.2018.2861573
- Oct 26, 2017
- IEEE Transactions on Image Processing
Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot and few-shot learning problems. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. Then, it learns how to combine these directions to obtain the principal direction for each unseen class such that the CAPD of the test image is aligned with the semantic embedding of the true class, and opposite to the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for selection of the most useful seen classes for each unseen class to achieve robustness in zero-shot learning. Our method can update the unseen CAPD taking the advantages of few unseen images to work in a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen-unseen diversity that significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot and few/one-shot learning problems.
- Research Article
- 10.1101/2025.02.27.640661
- Mar 3, 2025
- bioRxiv : the preprint server for biology
The fast accumulation of vast pharmacogenomics data of cancer cell lines provides unprecedented opportunities for drug sensitivity prediction (DSP), a crucial prerequisite for the advancement of precision oncology. Recently, generative Large Language Models (LLMs) have demonstrated performance and generalization prowess across diverse tasks in the field of natural language processing (NLP). However, the structured format of pharmacogenomics data poses a challenge for the utility of LLMs in DSP. Therefore, the objective of this study is multi-fold: to adapt prompt engineering for structured pharmacogenomics data toward optimizing LLM's DSP performance, to evaluate LLM's generalization in real-world DSP scenarios, and to compare LLM's DSP performance against that of state-of-the-science baselines. We systematically investigated the capability of the Generative Pre-trained Transformer (GPT) as a DSP model on four publicly available benchmark pharmacogenomics datasets, which are stratified by five cancer tissue types of cell lines and encompass both oncology and non-oncology drugs. Essentially, the predictive landscape of GPT is assessed for effectiveness on the DSP task via four learning paradigms: zero-shot learning, few-shot learning, fine-tuning, and clustering pretrained embeddings. To facilitate GPT in seamlessly processing the structured pharmacogenomics data, domain-specific novel prompt engineering is employed by implementing three prompt templates (i.e., Instruction, Instruction-Prefix, Cloze) and integrating pharmacogenomics-related features into the prompt. We validated GPT's performance in diverse real-world DSP scenarios: cross-tissue generalization, blind tests, and analyses of drug-pathway associations and top sensitive/resistant cell lines. Furthermore, we conducted a comparative evaluation of GPT against multiple Transformer-based pretrained models and existing DSP baselines.
Extensive experiments on the pharmacogenomics datasets across the five tissue cohorts demonstrate that fine-tuning GPT yields the best DSP performance (28% F1 increase, p-value = 0.0003), followed by clustering pretrained GPT embeddings (26% F1 increase, p-value = 0.0005), outperforming GPT in-context learning (i.e., few-shot). However, GPT in the zero-shot setting showed a large F1 gap, resulting in the worst performance. Within the scope of prompt engineering, performance enhancement was achieved by directly instructing GPT about the DSP task and resorting to a concise context format (i.e., instruction-prefix), leading to an F1 performance gain of 22% (p-value = 0.02), while incorporation of drug-cell line prompt context derived from genomics and/or molecular features further boosted the F1 score by 2%. Compared to state-of-the-science DSP baselines, GPT achieved significantly superior mean F1 performance (16% gain, p-value < 0.05) on the GDSC dataset. In the cross-tissue analysis, GPT showcased generalizability comparable to the within-tissue performances on the GDSC and PRISM datasets, while achieving statistically significant F1 performance improvements on the CCLE (8%, p-value = 0.001) and DrugComb (19%, p-value = 0.009) datasets. Evaluation on the challenging blind tests suggests GPT's competitiveness on the CCLE and DrugComb datasets compared to random splitting. Furthermore, analyses of the drug-pathway associations and log probabilities provided valuable insights that align with previous DSP findings. The diverse experiment setups and in-depth analysis underscore the importance of generative LLMs, such as GPT, as a viable in silico approach to guide precision oncology. https://github.com/bioIKEA/SensitiveCancerGPT.
- Research Article
- 10.62408/ai-ling.v1i1.13
- Aug 7, 2024
- AI-Linguistica. Linguistic Studies on AI-Generated Texts and Discourses
This paper investigates the use of ChatGPT, a large language model, for simplifying long sentences and nominal clusters in professional texts belonging to administrative and legal domains. We apply three prompt engineering techniques — zero-shot learning, few-shot learning, and Chain-of-Thought reasoning — to generate alternative sentences from a corpus of Italian texts. We evaluate the generated sentences using a survey with expert and non-expert readers of bureaucratic and legal Italian, focusing on ease of understanding, coherence, and preferences in rephrasing. Our results show that ChatGPT can effectively address the linguistic challenges outlined by UNI 11482:2013 Standard, and that complex prompting techniques yield better outcomes than simpler ones. We also discuss the implications of our findings for the optimization of text understanding and simplification using large language models.
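The three prompting techniques compared in the paper can be contrasted with a small sketch. The prompt wording and the example sentence are illustrative guesses, not the authors' templates:

```python
def simplification_prompt(sentence, technique="zero-shot"):
    """Build a sentence-simplification prompt in one of three styles:
    zero-shot (instruction only), few-shot (a worked example first),
    or chain-of-thought (intermediate reasoning steps requested)."""
    base = f"Rewrite this sentence in plain language:\n{sentence}"
    if technique == "zero-shot":
        return base
    if technique == "few-shot":
        example = (
            "Sentence: The undersigned hereby requests issuance of the permit.\n"
            "Plain: I ask you to issue the permit.\n\n"
        )
        return example + base
    if technique == "chain-of-thought":
        return base + (
            "\nFirst list the nominal clusters and subordinate clauses, "
            "then rewrite the sentence step by step."
        )
    raise ValueError(f"unknown technique: {technique}")

sentence = ("The issuance of the authorization is subject to the prior "
            "submission of the required documentation.")
zero_shot = simplification_prompt(sentence)
cot = simplification_prompt(sentence, "chain-of-thought")
```

The paper's finding that complex prompting beats simpler prompting corresponds here to the extra decomposition step the chain-of-thought variant requests.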
- Research Article
4
- 10.1093/bib/bbae354
- Jul 25, 2024
- Briefings in bioinformatics
Large language models (LLMs) are sophisticated AI-driven models trained on vast sources of natural language data. They are adept at generating responses that closely mimic human conversational patterns. One of the most notable examples is OpenAI's ChatGPT, which has been extensively used across diverse sectors. Despite their flexibility, a significant challenge arises as most users must transmit their data to the servers of companies operating these models. Utilizing ChatGPT or similar models online may inadvertently expose sensitive information to the risk of data breaches. Therefore, implementing LLMs that are open source and smaller in scale within a secure local network becomes a crucial step for organizations where ensuring data privacy and protection has the highest priority, such as regulatory agencies. As a feasibility evaluation, we implemented a series of open-source LLMs within a regulatory agency's local network and assessed their performance on specific tasks involving extracting relevant clinical pharmacology information from regulatory drug labels. Our research shows that some models work well in the context of few- or zero-shot learning, achieving performance comparable to, or even better than, that of neural network models that needed thousands of training samples. One of the models was selected to address a real-world issue of finding intrinsic factors that affect drugs' clinical exposure without any training or fine-tuning. In a dataset of over 700,000 sentences, the model showed a 78.5% accuracy rate. Our work pointed to the possibility of implementing open-source LLMs within a secure local network and using these models to perform various natural language processing tasks when large numbers of training examples are unavailable.
- Research Article
- 10.3390/fi17050207
- May 5, 2025
- Future Internet
This paper investigates, applies, and evaluates state-of-the-art Large Language Models (LLMs) for the classification of posts from a dark web hackers’ forum into four cyber-security categories. The LLMs applied included Mistral-7B-Instruct-v0.2, Gemma-1.1-7B, Llama-3-8B-Instruct, and Llama-2-7B, with zero-shot learning, few-shot learning, and fine-tuning. The four cyber-security categories consisted of “Access Control and Management”, “Availability Protection and Security by Design Mechanisms”, “Software and Firmware Flaws”, and “not relevant”. The hackers’ posts were also classified and labelled by a human cyber-security expert, allowing a detailed evaluation of the classification accuracy per each LLM and customization/learning method. We verified LLM fine-tuning as the most effective mechanism to enhance the accuracy and reliability of the classifications. The results include the methodology applied and the labelled hackers’ posts dataset.
- Research Article
- 10.55544/ijrah.5.1.24
- Jan 30, 2025
- Integrated Journal for Research in Arts and Humanities
Modern artificial intelligence systems frequently rely on vast amounts of labeled data to achieve robust performance, yet many real-world scenarios suffer from limited data availability. This paper investigates the potential of integrating zero-shot and few-shot learning paradigms with generative AI models to bridge the persistent data gap. Zero-shot learning empowers models to recognize and classify instances from unseen categories by leveraging semantic descriptors, while few-shot learning focuses on adapting models to new classes using only a handful of examples. Generative AI techniques, such as advanced generative adversarial networks and transformer-based models, can synthesize realistic data samples that mimic complex distributions found in natural environments. By combining these approaches, our methodology offers a dual advantage: it not only enhances model generalization across diverse tasks but also mitigates the challenges posed by data scarcity. We demonstrate the effectiveness of this hybrid framework through experiments in domains including computer vision, natural language processing, and anomaly detection, where traditional data collection is prohibitive. Our analysis reveals that the strategic use of generated data significantly boosts learning outcomes, even when initial training samples are sparse. Furthermore, the adaptability of the proposed system makes it suitable for dynamic, real-world applications where new categories continuously emerge. Overall, this study provides a comprehensive overview of leveraging generative AI to enhance zero-shot and few-shot learning, paving the way for more resilient and scalable solutions in environments constrained by limited data resources. These innovations promise to reshape the future of machine learning by opening new pathways for robust AI development.
- Research Article
5
- 10.1177/00491241251325243
- Apr 24, 2025
- Sociological Methods & Research
Large language models (LLMs) have tremendous potential for social science research as they are trained on vast amounts of text and can generalize to many tasks. We explore the use of LLMs for supervised text classification, specifically the application to stance detection, which involves detecting attitudes and opinions in texts. We examine the performance of these models across different architectures, training regimes, and task specifications. We compare 10 models ranging in size from tens of millions to hundreds of billions of parameters and test four distinct training regimes: Prompt-based zero-shot learning and few-shot learning, fine-tuning, and instruction-tuning, which combines prompting and fine-tuning. The largest, most powerful models generally offer the best predictive performance even with little or no training examples, but fine-tuning smaller models is a competitive solution due to their relatively high accuracy and low cost. Instruction-tuning the latest generative LLMs expands the scope of text classification, enabling applications to more complex tasks than previously feasible. We offer practical recommendations on the use of LLMs for text classification in sociological research and discuss their limitations and challenges. Ultimately, LLMs can make text classification and other text analysis methods more accurate, accessible, and adaptable, opening new possibilities for computational social science.
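A minimal zero-shot stance-detection prompt of the kind described above might look like this. The label set, target, and example text are hypothetical illustrations:

```python
def stance_prompt(text, target, labels=("favor", "against", "neutral")):
    """Zero-shot stance-detection prompt: the model receives only the
    label set, the target, and the text, with no training examples."""
    return (
        f"What is the stance of the following text toward '{target}'? "
        f"Answer with exactly one of: {', '.join(labels)}.\n\n"
        f"Text: {text}\nStance:"
    )

stance = stance_prompt(
    "Raising the minimum wage would help working families.",
    "a minimum wage increase",
)
```

A few-shot regime would prepend labeled text/stance pairs to this prompt, while fine-tuning and instruction-tuning instead update model weights on such pairs.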
- Research Article
- 10.1609/aaai.v39i1.32046
- Apr 11, 2025
- Proceedings of the AAAI Conference on Artificial Intelligence
Automated Program Repair (APR) for introductory programming assignments (IPAs) is motivated by the large number of student enrollments in programming courses each year. Since providing feedback on programming assignments requires substantial time and effort from faculty, personalized automated feedback often involves suggesting repairs to students' programs. Symbolic semantic repair approaches, which rely on Formal Methods (FM) to check a program's execution against a test suite or reference solution, are effective but limited. These tools excel at identifying buggy parts but can only fix programs if the correct implementation and the faulty one share the same control flow graph. Conversely, Large Language Models (LLMs) are used for program repair but often make extensive rewrites instead of minimal adjustments. This tends to lead to more invasive fixes, making it harder for students to learn from their mistakes. In summary, LLMs excel at completing strings, while FM-based fault localization excels at identifying buggy parts of a program. In this paper, we propose a novel approach that combines the strengths of both FM-based fault localization and LLMs, via zero-shot learning, to enhance APR for IPAs. Our method uses MaxSAT-based fault localization to identify buggy parts of a program, then presents the LLM with a program sketch devoid of these buggy statements. This hybrid approach follows a Counterexample Guided Inductive Synthesis (CEGIS) loop to iteratively refine the program. We ask the LLM to synthesize the missing parts, which are then checked against a test suite. If the suggested program is incorrect, a counterexample from the test suite is fed back to the LLM for revised synthesis. Our experiments on 1,431 incorrect student programs show that our counterexample-guided approach, using MaxSAT-based bug-free program sketches, significantly improves the repair capabilities of all six evaluated LLMs.
This method allows LLMs to repair more programs and produce smaller fixes, outperforming other configurations and state-of-the-art symbolic program repair tools.
- Book Chapter
4
- 10.4018/979-8-3693-1822-5.ch007
- Apr 5, 2024
Essential to the development of AI and machine learning, this chapter explores the complex areas of few-shot and zero-shot learning. These techniques have driven great advancements toward more efficient and adaptive AI systems: few-shot learning enables models to learn from minimal data, while zero-shot learning enables inference about data instances without previous exposure. Nevertheless, there are several limits and difficulties associated with these procedures. This chapter delves deeply into the theoretical foundations of both techniques, explaining how they work and what problems they solve in different ways. It examines the semantic gap, domain adaptation problems, and model bias, as well as the computational restrictions, overfitting, and model generalizability issues that are intrinsic to few-shot learning and zero-shot learning, respectively. By comparing and contrasting the two ideas, we may better understand their potential use in different real-world contexts.
- Research Article
- 10.1007/s00117-025-01522-1
- Nov 5, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01534-x
- Nov 4, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01525-y
- Nov 4, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01517-y
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01515-0
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)
- News Article
- 10.1007/s00117-025-01530-1
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)
- Front Matter
- 10.1007/s00117-025-01519-w
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01524-z
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01502-5
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)
- Research Article
- 10.1007/s00117-025-01508-z
- Nov 1, 2025
- Radiologie (Heidelberg, Germany)