Abstract
Text summarization in the medical domain is one of the most critical tasks, as it deals with sensitive human information. Consequently, proper summarization and key-point extraction from medical documents using pre-trained language models has become a central focus for researchers. However, the considerable amount of real-world data and the enormous memory required to train Large Language Models (LLMs) make research on these models challenging. To overcome these challenges, multiple prompting and tuning techniques are being used. In this paper, the effectiveness of prompt engineering and parameter-efficient fine-tuning is studied for summarizing Hospital Discharge Summary (HDS) documents, so that these models can accurately interpret medical terminology and context, generate brief yet informative summaries, and extract key themes, which opens new avenues for the application of LLMs in healthcare and makes HDSs more patient-friendly. In this research, LLaMA 2 (Large Language Model Meta AI) is used as the base model, and it is fine-tuned using QLoRA (Quantized Low-Rank Adapters), which reduces the memory usage of LLMs without compromising output quality. This study explores how LLMs can be applied to HDS datasets, without prohibitive memory demands thanks to QLoRA, and integrated into electronic health record systems to further streamline the handling and retrieval of healthcare information.
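To illustrate the memory savings that motivate the QLoRA approach described above, the following back-of-envelope sketch compares storing LLaMA 2 7B weights in 16-bit versus 4-bit precision and counts the trainable parameters a low-rank adapter adds. All figures (rank 16, 4096-dimensional projections, adapters on two attention projections across 32 layers) are illustrative assumptions, not configurations reported in the paper.

```python
# Back-of-envelope memory arithmetic for QLoRA-style fine-tuning of LLaMA 2 7B.
# The specific rank, dimensions, and target modules below are assumptions
# chosen for illustration, not values taken from the paper.

def model_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Memory needed to hold the weights alone, in gigabytes."""
    return n_params * bits_per_param / 8 / 1024**3

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by one low-rank adapter pair:
    A is (rank x d_in) and B is (d_out x rank), so W + B @ A keeps W frozen."""
    return rank * d_in + d_out * rank

N = 7e9  # approximate LLaMA 2 7B parameter count

fp16_gb = model_memory_gb(N, 16)  # full-precision baseline
nf4_gb = model_memory_gb(N, 4)    # 4-bit quantized base weights, as in QLoRA

# Hypothetical adapter setup: rank-16 LoRA on two 4096x4096 attention
# projections per layer, across 32 transformer layers.
per_layer = 2 * lora_params(4096, 4096, rank=16)
trainable = 32 * per_layer

print(f"fp16 weights:  {fp16_gb:.1f} GB")
print(f"4-bit weights: {nf4_gb:.1f} GB")
print(f"trainable adapter params: {trainable / 1e6:.1f} M "
      f"({trainable / N * 100:.3f}% of the base model)")
```

Under these assumptions the quantized base model shrinks from roughly 13 GB to about 3 GB, while the adapters add under a tenth of a percent of the original parameter count, which is the core of how QLoRA makes fine-tuning feasible on modest hardware.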