Abstract

Background: High literacy levels are needed to understand oncology clinical trial (CT) informed consent forms (ICFs), which represents a barrier to enrollment of older adults and diverse populations. ChatGPT-4 is an artificial intelligence chatbot that responds to user prompts and can summarize large amounts of text. We tested whether ChatGPT-4 could simplify CT information from ICFs.

Methods: On May 22, 2023, we searched clinicaltrials.gov for interventional, therapeutic, NIH-funded CTs involving adults with the 14 most prevalent cancer types. Only CTs with available study protocols whose status was “recruiting”, “enrolling by invitation”, or “active, not recruiting” were included. Trials that were diagnostic, preventative, or supportive were excluded. Publicly available ICFs from the resulting CTs were downloaded and analyzed. Using the ChatGPT-4 plugin askyourpdf.com, we asked ChatGPT-4 to review each ICF and answer, at a 6th-grade reading level, 8 questions recommended by the NCCN for patients considering a CT. Our prompt included the following 8 questions: “1) What are the treatments used in the clinical trial? 2) Has the treatment been used for other types of cancer? 3) What are the risks and benefits of this treatment? 4) What side effects should I expect and how will they be managed? 5) How long will I be in the clinical trial? 6) Will I be able to get other treatment if this doesn’t work? 7) How will you know if the treatment is working? 8) Will the clinical trial cost me anything?” Readability was assessed for both the ICFs and ChatGPT-4’s responses using the validated Flesch-Kincaid (FK), Gunning Fog (GF), and SMOG indices via the online Readable App. Data were summarized with descriptive statistics, and t-tests were used to compare reading levels between the ICFs and ChatGPT-4’s answers.

Results: Our search yielded 83 therapeutic oncology CTs, of which 70 had publicly available ICFs. ChatGPT-4 successfully analyzed 66 of the 70 ICFs (94.3%). The mean reading levels of its answers were 6.2 (95% CI: 5.9-6.5), 8.6 (95% CI: 8.2-8.9), and 9.2 (95% CI: 8.9-9.4) based on the FK, GF, and SMOG indices, respectively. Of the 70 ICFs, 54 (77.1%) contained text that could be evaluated for readability and were included in the analysis. Their mean reading levels were 7.9 (95% CI: 7.7-8.1), 9.3 (95% CI: 9.1-9.6), and 10.5 (95% CI: 10.2-10.8) based on the FK, GF, and SMOG indices, respectively. ChatGPT-4’s responses had a significantly lower reading level than the ICF text on all three readability indices (FK: p<0.01; GF: p=0.02; SMOG: p<0.01).

Conclusions: ChatGPT-4 presented key information from oncology CT ICFs at a 6th- to 9th-grade reading level, significantly lower than the original ICFs. While further studies are needed to assess ChatGPT-4’s accuracy, this study shows its potential as a tool for improving patients’ understanding of oncology CTs.
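The three readability indices used here are simple formulas over word, sentence, and syllable counts. For illustration only, the sketch below computes them in Python using a naive vowel-group syllable heuristic; this is an assumption for demonstration, not the method of the Readable App used in the study, which applies more careful syllable and complex-word rules.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, adjusting for a trailing silent 'e'.
    Real readability tools use more refined, dictionary-informed rules."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text: str) -> dict:
    """Flesch-Kincaid grade, Gunning Fog, and SMOG for a text sample.
    Note: SMOG is formally defined for samples of 30 sentences; Gunning Fog's
    'complex words' are approximated here as any word with >= 3 syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_words, n_sents = len(words), len(sentences)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    fk = 0.39 * (n_words / n_sents) + 11.8 * (syllables / n_words) - 15.59
    gf = 0.4 * ((n_words / n_sents) + 100 * (polysyllables / n_words))
    smog = 1.043 * math.sqrt(polysyllables * (30 / n_sents)) + 3.1291
    return {"flesch_kincaid": fk, "gunning_fog": gf, "smog": smog}
```

A text of short, monosyllabic sentences (e.g., "The cat sat on the mat. It was warm.") scores near the bottom of each scale, while long sentences dense with polysyllabic words push all three indices toward the college-level range, which is why simplifying ICF language lowers the scores.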
