Abstract

Background

Addressing disparities in living kidney donation requires making information accessible across literacy levels, which is especially important given that the average American adult reads at an 8th-grade level. This study evaluated the effectiveness of ChatGPT, an advanced AI language model, in simplifying living kidney donation information to an 8th-grade reading level or below.

Methods

We used ChatGPT versions 3.5 and 4.0 to modify 27 questions and answers from Donate Life America, a key resource on living kidney donation. We measured the readability of both the original and modified texts using the Flesch-Kincaid formula. A paired t-test was conducted to assess changes in readability levels, and a statistical comparison between the two ChatGPT versions was performed.

Results

Originally, the FAQs had an average reading level of 9.6 ± 1.9. Post-modification, ChatGPT 3.5 achieved an average readability level of 7.72 ± 1.85, while ChatGPT 4.0 reached 4.30 ± 1.71, both with a p-value <0.001 indicating a significant reduction. ChatGPT 3.5 made 59.26% of answers readable below the 8th-grade level, whereas ChatGPT 4.0 did so for 96.30% of the texts. The grade-level range for modified answers was 3.4–11.3 for ChatGPT 3.5 and 1–8.1 for ChatGPT 4.0.

Conclusion

Both ChatGPT 3.5 and 4.0 effectively lowered the readability grade levels of complex medical information, with ChatGPT 4.0 being more effective. This suggests ChatGPT's potential role in promoting diversity and equity in living kidney donation, indicating scope for further refinement in making medical information more accessible.
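The Flesch-Kincaid Grade Level metric used in the study can be sketched as follows. This is a minimal illustration of the standard formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59); the regex-based syllable counter is a naive heuristic for demonstration, not the tool the authors used.

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels (treating 'y' as a
    # vowel) and drop a silent trailing 'e'; every word has >= 1 syllable.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A higher score maps to a higher U.S. school grade, so the study's target of "8th grade or below" corresponds to a score of 8.0 or less; very simple text can score below zero.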
