Abstract

Improving patient education materials may improve patient outcomes. This study explores the feasibility of generating patient education materials with the assistance of a large language model, Chat Generative Pre-Trained Transformer (ChatGPT), and compares the accuracy and readability of ChatGPT-generated materials with expert-generated materials. Patient education materials on implant-based breast reconstruction were generated independently by experts and by ChatGPT. Readability and accuracy of the materials were the main outcomes. Readability was compared using the Flesch-Kincaid grade level, and the accuracy of ChatGPT-generated materials was evaluated by 2 independent reviewers. Content errors were categorized into information errors, statistical errors, and multiple errors (errors of 2 or more types). The content generated by experts had higher readability: expert-generated materials scored at a 7.5 grade level, whereas ChatGPT-generated content scored at a 10.5 grade level, despite ChatGPT being asked to generate content at a seventh grade level. The accuracy of ChatGPT-generated content was 50%, with most errors being information errors; ChatGPT often provided information about breast reduction or breast augmentation, despite being asked specifically about breast reconstruction. Despite these limitations, ChatGPT substantially reduced the time required to generate patient education materials: whereas experts required 1 month, ChatGPT generated materials within 30 minutes. ChatGPT can be a powerful starting tool for generating patient education materials, but its readability and accuracy still require improvement.
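The readability comparison rests on the Flesch-Kincaid grade level, a standard formula: 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. The abstract does not state which tool the authors used to compute it, so the following Python sketch is illustrative only; the regex-based sentence splitting and the crude syllable heuristic are assumptions made here for a self-contained example, and production readability tools use dictionary-based syllable counts.

import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels,
    # subtracting a trailing silent "e". Dictionary-based
    # counters are more accurate; this is for illustration.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = ("An implant is placed under the chest muscle. "
          "Most patients go home on the same day.")
print(round(flesch_kincaid_grade(sample), 1))

On short, plain sentences like the sample above, the formula returns a low grade level, which is why patient education guidance typically targets writing at or below the seventh grade level reported for the expert materials.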
