Abstract

Background: The US Food and Drug Administration (USFDA) communicates new drug safety concerns through drug safety communications (DSCs) and medication guides (MGs), whose complexity often challenges patients with average reading abilities. This study assesses whether large language models (LLMs) can improve the readability of these materials.

Methods: We analyzed the latest DSCs and MGs, using ChatGPT 4.0© and Gemini© to simplify them to a sixth-grade reading level. Outputs were evaluated for readability, technical accuracy, and content inclusiveness.

Results: The original materials were difficult to read (grade level 13 for DSCs, 22 for MGs). Both LLMs significantly improved readability. With a single prompt, DSCs dropped to grade 10.1 (ChatGPT 4.0©) and 8 (Gemini©), and MGs to 7.1 and 6.5; with multiple prompts, DSCs reached 10.3 and 7.5, and MGs 8 and 6.8. LLM outputs retained technical accuracy and key messages.

Conclusion: LLMs can significantly simplify complex health-related information, making it more accessible to patients. Future research should extend these findings to other languages and patient groups in real-world settings.
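The grade-level scores reported above are conventionally produced by readability formulas such as the Flesch-Kincaid grade level; the abstract does not name the exact tool used, so the following is only an illustrative sketch of how such a score is computed, using a naive vowel-group syllable heuristic rather than a dictionary-based counter:

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of vowels; drop a trailing silent "e".
    # Real readability tools use pronunciation dictionaries instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A sixth-grade target, as used in the study's prompts, corresponds to a score near 6: short sentences built from mostly one- and two-syllable words.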
