The American Medical Association (AMA) recommends that patient education materials (PEMs) be written at or below a sixth-grade reading level. This study sought to determine the quality, readability, and content of available alveolar bone grafting (ABG) PEMs and to assess whether artificial intelligence can improve PEM readability. The study was a review of free online PEMs. Online ABG PEMs were retrieved from different authoring body types (hospital/academic center, medical society, or private practice). No interventions were performed. Content was assessed by screening PEMs for specific ABG-related topics. Quality was evaluated with the Patient Education Material Assessment Tool (PEMAT), which includes measures of understandability and actionability. Open-access readability software (WebFX) determined readability with the Flesch Reading Ease, Flesch-Kincaid Grade Level, and Gunning-Fog Index. PEMs were then rewritten with ChatGPT, and readability metrics were reassessed. The main outcome measures were the quality, readability, and content of ABG PEMs. Thirty-four PEMs were analyzed. Regarding quality, the average PEMAT-understandability score was 67.0 ± 16.2%, just below the minimum acceptable score of 70.0% (p = 0.281). The average PEMAT-actionability score was low at 33.0 ± 24.1%. Regarding readability, the average Flesch Reading Ease score was 64.6 ± 12.8, categorized as "standard/plain English." The average Flesch-Kincaid Grade Level was 8.0 ± 2.3, significantly higher than the AMA-recommended sixth-grade level (p < 0.0001). Rewriting PEMs with ChatGPT improved the average Flesch-Kincaid Grade Level to 6.1 ± 1.3 (p < 0.0001). Available ABG PEMs are written above the recommended reading level, but ChatGPT can improve their readability. Future work should target the areas in which ABG PEMs are most lacking, such as actionability.
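For reference, the three readability indices reported above have standard published definitions based on word, sentence, syllable, and complex-word counts (specific tools such as WebFX may differ slightly in how they count syllables):

\[
\text{Flesch Reading Ease} = 206.835 - 1.015\left(\frac{\text{words}}{\text{sentences}}\right) - 84.6\left(\frac{\text{syllables}}{\text{words}}\right)
\]
\[
\text{Flesch-Kincaid Grade Level} = 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59
\]
\[
\text{Gunning-Fog Index} = 0.4\left[\frac{\text{words}}{\text{sentences}} + 100\left(\frac{\text{complex words}}{\text{words}}\right)\right]
\]

Here "complex words" are those with three or more syllables; the Flesch-Kincaid and Gunning-Fog values correspond roughly to U.S. school grade levels, which is why a score of 8.0 exceeds the AMA's sixth-grade target.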