To address patient health literacy, the American Medical Association and the National Institutes of Health recommend that patient education materials be written at or below an eighth-grade reading level. However, patient-facing materials often remain above this recommended level. Current online calculators provide readability scores, but they cannot provide text-specific feedback, which could streamline the process of simplifying patient materials. The purpose of this study was to evaluate Chat Generative Pretrained Transformer (ChatGPT) 3.5 as a tool for optimizing patient-facing hand surgery education materials through reading-level analysis and simplification. The readability of 18 patient-facing hand surgery education materials was assessed with both a traditional online reading-level calculator and ChatGPT 3.5. The original excerpts were then entered into ChatGPT 3.5 and simplified by the artificial intelligence tool, and the simplified excerpts were scored with the same tools. Readability scores for the original excerpts from the online calculator and from ChatGPT 3.5 were similar. The simplified excerpts scored lower than the originals, with a mean grade level of 7.28, below the recommended maximum of 8. Using ChatGPT 3.5 to analyze and simplify the readability of patient-facing hand surgery materials is efficient and may help facilitate the conveyance of important health information. ChatGPT 3.5 produced readability scores comparable with those of traditional readability calculators, along with excerpt-specific feedback. It was also able to simplify materials to the recommended grade level. By confirming ChatGPT 3.5's ability to assess and simplify patient education materials, this study offers a practical solution for potentially improving patient comprehension, engagement, and health outcomes in clinical settings.
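As a rough illustration of the scoring step described above, the sketch below estimates a reading grade level with the Flesch-Kincaid grade formula, a common readability metric. The abstract does not name the specific formula its online calculator applied, so both the choice of metric and the simple syllable-counting heuristic here are assumptions, not the study's actual method.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of consecutive vowels in the word."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # discount a typical silent final 'e'
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)

# Hypothetical patient-education excerpt, for illustration only.
excerpt = ("Carpal tunnel syndrome occurs when the median nerve is "
           "compressed at the wrist, causing numbness and tingling.")
print(f"Estimated grade level: {flesch_kincaid_grade(excerpt):.2f}")
```

In a workflow like the one the study describes, a score above 8 from such a check would flag an excerpt for simplification, after which the simplified text would be rescored with the same formula.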