Abstract

Introduction: Large language models (LLMs), such as OpenAI's ChatGPT-4, are designed to process natural language and generate responses to text-based prompts. While these models have shown promise in addressing clinical and patient-related inquiries, they are not specialized in medicine and lack integration with dedicated medical knowledge databases, leading to potential inaccuracies. Conversely, healthcare professionals and models trained explicitly on medical texts frequently rely on specialized terminology, which can create significant barriers to clear, patient-friendly communication. This study uses LLMs to translate complex medical terminology into easy-to-understand explanations, focusing on hematology and oncology, fields in which communicating medical concepts to the public is particularly challenging. Our objective is to develop a solution that makes explanations both accurate and accessible, bridging the gap between technical medical knowledge and patient comprehension.

Methods: We curated a dataset of cancer-related terms and their explanations from two sources: the National Cancer Institute (NCI) Dictionary, which provides detailed medical definitions, and simplified explanations based on National Comprehensive Cancer Network (NCCN) guidelines for patients. Using Meta's LLaMA 7B-based chat model, we implemented retrieval-augmented generation (RAG) so the model could consult the NCI Dictionary as needed. To steer the model toward patient-friendly explanations, we applied LoRA-based supervised fine-tuning (SFT). Readability was measured with the Flesch Reading Ease Score (FRES) and the Dale-Chall Readability Formula (DCRF), and improvements in accessibility were quantified with a two-sample t-test comparing the mean readability scores of the model's outputs against the baseline.

Results: The fine-tuned model showed significant improvements in readability and accessibility: a 5% increase in FRES (baseline: 69.25; output: 72.60; P < 0.01) and a 4% decrease in DCRF (baseline: 9.50; output: 9.15; P < 0.01). Because higher FRES and lower DCRF both indicate easier text, both changes reflect improvement. Preliminary results also indicate that the model can translate entire paragraphs of dense medical text into patient-friendly explanations. Expert validation at a larger scale is underway to further establish the model's potential to improve patient communication in oncology and to set a standard for applying advanced language models in healthcare education.

Conclusion: We trained a large language model specifically designed to simplify complex oncology terms into patient-friendly language. As a reliable tool for delivering precise, accessible medical information, it has the potential to reduce the workload of healthcare providers and enhance patient understanding in clinical settings.

Citation Format: Yuqing Wang, Inae Park, Simo Du, Zhengrui Xiao, Yizhou Chen. Simplifying oncology terminology: Leveraging large language models for patient education [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 7424.
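Note on the retrieval step: the abstract does not report implementation details, but a minimal RAG setup over the NCI Dictionary can be sketched as below. The embedding model ("all-MiniLM-L6-v2"), the dictionary export path ("nci_dictionary.json"), and the prompt wording are illustrative assumptions, not the authors' actual stack.

```python
import json

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed embedding model; the abstract does not name one.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical JSON export of the NCI Dictionary.
with open("nci_dictionary.json") as f:
    entries = json.load(f)  # expected: [{"term": ..., "definition": ...}, ...]

corpus = [f"{e['term']}: {e['definition']}" for e in entries]
corpus_emb = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k dictionary entries most similar to the query."""
    q_emb = embedder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb  # cosine similarity (vectors are normalized)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(term: str) -> str:
    """Ground the chat model in retrieved NCI definitions before it answers."""
    context = "\n".join(retrieve(term))
    return (
        "Using the dictionary entries below, explain the term in plain, "
        "patient-friendly language.\n\n"
        f"Dictionary entries:\n{context}\n\n"
        f"Term: {term}"
    )

print(build_prompt("neutropenia"))
```

The prompt produced here would then be passed to the chat model, so generated explanations stay anchored to authoritative NCI definitions rather than the model's parametric memory.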
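The LoRA-based SFT stage can likewise be sketched with Hugging Face's peft library. The checkpoint name "meta-llama/Llama-2-7b-chat-hf" and all adapter hyperparameters (rank, alpha, target modules) are assumptions; the abstract states only that a LLaMA 7B-based chat model was fine-tuned with LoRA.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the abstract says only "LLaMA 7B-based chat model".
BASE = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)

lora_cfg = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable

# SFT then proceeds with a standard causal-LM training loop over
# (NCI definition, NCCN-style simplified explanation) pairs; under this
# configuration only a small fraction (~0.1%) of the weights update.
```

Freezing the base model and training low-rank adapters keeps the memory footprint small enough to tune a 7B model on a single GPU, which is the usual motivation for choosing LoRA over full fine-tuning.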
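For reference, FRES = 206.835 − 1.015 × (words/sentence) − 84.6 × (syllables/word), and DCRF = 0.1579 × (percentage of "difficult" words outside the Dale-Chall familiar-word list) + 0.0496 × (average sentence length), with a +3.6365 adjustment when difficult words exceed 5%. A minimal sketch of the reported comparison follows, using the textstat package and SciPy's two-sample t-test; both library choices are illustrative, as the abstract does not name its tooling.

```python
import textstat
from scipy.stats import ttest_ind

# Placeholder texts; in the study these would be the NCI Dictionary
# definitions (baseline) and the fine-tuned model's explanations.
baseline_texts = [
    "Anemia: a condition in which the number of red blood cells is below normal.",
    "Neutropenia: an abnormal decrease in the number of neutrophils in the blood.",
]
model_outputs = [
    "Anemia means your blood has fewer red blood cells than normal, "
    "which can make you feel tired or weak.",
    "Neutropenia means you have fewer of the white blood cells that fight "
    "infection, so you may get sick more easily.",
]

def readability(texts):
    """Per-text FRES (higher = easier) and DCRF (lower = easier)."""
    fres = [textstat.flesch_reading_ease(t) for t in texts]
    dcrf = [textstat.dale_chall_readability_score(t) for t in texts]
    return fres, dcrf

base_fres, base_dcrf = readability(baseline_texts)
out_fres, out_dcrf = readability(model_outputs)

# Two-sample t-tests on mean readability, mirroring the reported comparison.
print(ttest_ind(out_fres, base_fres))  # expect higher FRES for model outputs
print(ttest_ind(out_dcrf, base_dcrf))  # expect lower DCRF for model outputs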