Abstract

This study evaluated the potential of large language models (LLMs) for delivering patient education materials. Two LLMs, Gemini and ChatGPT 3.5, were analysed for their ability to provide clear and understandable information on blepharitis, a common eye condition. The understandability and actionability of the LLMs' responses to a set of questions were evaluated using the Patient Education Materials Assessment Tool (PEMAT), a standardised instrument for assessing educational materials. The responses covered the important aspects of blepharitis, yet the Flesch-Kincaid readability scores fell below the suggested range of 60-70 for patient education materials: Gemini scored 38.75 and ChatGPT 3.5 scored 26.35, suggesting that the content may be too complex for the target audience. These findings suggest that while LLMs have the potential to be informative resources, their current readability levels may limit their effectiveness in providing accessible health information to patients. Further research is needed to explore methods for adapting LLM outputs to ensure clear, concise communication suitable for patient education.
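The 60-70 band cited above corresponds to the Flesch Reading Ease scale, on which higher scores indicate easier text. As a point of reference, the standard formula can be sketched as follows; the word, sentence, and syllable counts in the usage example are hypothetical and are not taken from the study:

```python
def flesch_reading_ease(total_words: int, total_sentences: int, total_syllables: int) -> float:
    """Standard Flesch Reading Ease formula.

    Scores of 60-70 are conventionally regarded as "plain English"
    suitable for a general readership; the LLM outputs in this study
    scored well below that band (38.75 and 26.35).
    """
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

# Hypothetical counts for illustration only (not drawn from the study's data):
score = flesch_reading_ease(total_words=100, total_sentences=5, total_syllables=130)
print(score)  # roughly 76.6, above the 60-70 target band
```

In practice the hard part is counting syllables, which published tools approximate with heuristics; the formula itself is fixed, so lower scores here reflect longer sentences and more polysyllabic words in the LLM responses.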
