Abstract

Background
This study aimed to assess the efficacy of ChatGPT 3.5, an artificial intelligence (AI) language model, in generating readable and accurate layperson's summaries from abstracts of vascular surgery studies.

Methods
Abstracts from four leading vascular surgery journals published between October 2023 and December 2023 were used. A ChatGPT prompt for developing layperson's summaries was designed based on established methodology. Readability measures and grade-level assessments were compared between original abstracts and ChatGPT-generated summaries. Two vascular surgeons evaluated a randomized sample of ChatGPT summaries for clarity and correctness. Readability scores of original abstracts were compared with ChatGPT-generated layperson's summaries using a t test. Moreover, a subanalysis based on abstract topics was performed. Cohen's kappa assessed interrater reliability for accuracy and clarity.

Results
One hundred fifty papers were included in the database. Statistically significant differences were observed in readability measures and grade-level assessments between original abstracts and AI-generated summaries, indicating improved readability in the latter (mean Global Readability Score of 36.6 ± 13.8 for the original abstracts vs 50.5 ± 11.1 for the AI-generated summaries; P < .001). This trend persisted across abstract topics and journals. Although one physician found all summaries correct, the other noted inaccuracies in 32% of cases, with mean rating scores of 4.0 and 4.7, respectively, and no interobserver agreement (κ value = −0.1).

Conclusions
ChatGPT demonstrates usefulness in producing patient-friendly summaries from scientific abstracts in vascular surgery, although the accuracy and quality of AI-generated summaries warrant further scrutiny.
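For readers unfamiliar with the reliability statistic reported above, the following is a minimal sketch of Cohen's kappa, κ = (pₒ − pₑ)/(1 − pₑ), where pₒ is observed agreement and pₑ is agreement expected by chance. The rating vectors are purely illustrative (not the study's data); they mimic the reported pattern in which one rater marks every summary correct, which drives κ toward zero or below even when raw agreement looks high.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    labels = sorted(set(r1) | set(r2))
    # Observed agreement: fraction of items both raters labeled identically
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of each rater's marginal label frequencies
    p_e = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings (1 = correct, 0 = inaccurate): rater 1 accepts all
# summaries, rater 2 flags some — a pattern like the one the study reports.
rater1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
rater2 = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.0
```

Note that when one rater gives a constant rating, chance-expected agreement equals observed agreement, so κ collapses to about zero regardless of how often the raters coincide, which is consistent with the near-zero κ the study reports.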
