Abstract
A patient's education level or socio-economic background may dictate their ability to understand medical jargon, and an inability to understand the primary findings of a radiology report may lead to unnecessary anxiety or missed follow-up. We aim to meet this challenge by developing a patient-sensitive summarization model for radiology reports. We selected chest computed tomography (CT) exams as a use case and collected 7000 studies from Mayo Clinic. The summarization model was built on top of the T5 large language model (LLM), as our experiments indicated that its text-to-text transfer architecture was well suited for abstractive text summarization; the resulting model has 0.77B trainable parameters. Noisy ground truth for model training was collected by prompting the LLaMA-13B model. We recruited experts (board-certified radiologists) and laymen to manually evaluate the model-generated summaries. By the majority opinion of the radiologists, our model rarely missed information. Laymen indicated a 63% improvement in their understanding after reading the model-generated layman summaries. Comparison with the zero-shot performance of ChatGPT indicated that the proposed model halved the rate of hallucination and reduced the rate of missing important information fivefold. The proposed model can generate reliable summaries of radiology reports that are understandable by patients with vastly different levels of medical knowledge.
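The paper builds its summarizer on top of T5-large (about 0.77B parameters) using its text-to-text interface, but the abstract does not include code. Purely as a minimal sketch under assumptions (Hugging Face Transformers, the stock `t5-large` checkpoint rather than the authors' fine-tuned weights, an invented example report, and illustrative generation settings), abstractive summarization in this style might look like the following:

```python
# Minimal sketch, NOT the authors' released model: load a T5-large
# checkpoint (~0.77B parameters) and generate an abstractive summary
# of a radiology report via T5's text-to-text interface.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# Hypothetical example report; the paper's Mayo Clinic data is not public.
report = (
    "CT chest without contrast: A 6 mm nodule is noted in the right "
    "upper lobe. No pleural effusion or pneumothorax."
)

# T5 frames every task as text-to-text; a task prefix steers generation.
# The prefix and decoding settings here are illustrative assumptions.
inputs = tokenizer("summarize: " + report, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

In the paper's pipeline, a checkpoint like this would be fine-tuned on report-summary pairs (the noisy ground truth elicited from LLaMA-13B) before being evaluated by radiologists and laymen.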