Abstract

Introduction
Cardiac arrest leaves witnesses, survivors, and their relatives with a multitude of questions. When a young person or a public figure is affected, interest in cardiac arrest and cardiopulmonary resuscitation (CPR) increases. ChatGPT allows everyone to obtain human-like responses on any topic. Given the risks of accessing incorrect information, we assessed ChatGPT's accuracy in answering laypeople's questions about cardiac arrest and CPR.

Methods
We co-produced a list of 40 questions with members of Sudden Cardiac Arrest UK covering all aspects of cardiac arrest and CPR. The answers ChatGPT provided to each question were evaluated by professionals for accuracy; by professionals and laypeople for relevance, clarity, comprehensiveness, and overall value on a scale from 1 (poor) to 5 (excellent); and for readability.

Results
ChatGPT answers received an overall positive evaluation (4.3 ± 0.7) from 14 professionals and 16 laypeople. Clarity (4.4 ± 0.6), relevance (4.3 ± 0.6), accuracy (4.0 ± 0.6), and comprehensiveness (4.2 ± 0.7) of answers were also rated high. Professionals, however, rated overall value (4.0 ± 0.5 vs 4.6 ± 0.7; p = 0.02) and comprehensiveness (3.9 ± 0.6 vs 4.5 ± 0.7; p = 0.02) lower than laypeople did. CPR-related answers consistently received lower scores across all parameters from both professionals and laypeople. Readability was 'difficult' (median Flesch reading ease score of 34 [IQR 26–42]).

Conclusions
ChatGPT provided largely accurate, relevant, and comprehensive answers to questions about cardiac arrest commonly asked by survivors, their relatives, and lay rescuers, except for CPR-related answers, which received the lowest scores. Large language models will play a significant role in the future, and the healthcare-related content they generate should be monitored.
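The readability result above uses the Flesch reading ease formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), where higher scores mean easier text and scores in the 30–49 band are conventionally labelled 'difficult'. As a minimal sketch of how such a score can be computed, the following uses a naive vowel-group syllable heuristic (the study's exact tooling is not stated, so this is illustrative only):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    # Count sentence-ending punctuation runs as sentence boundaries.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def count_syllables(word: str) -> int:
        # Naive heuristic: count vowel groups, dropping a trailing silent 'e'.
        word = word.lower()
        n = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and n > 1:
            n -= 1
        return max(1, n)

    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

For example, a short, monosyllabic sentence such as "The cat sat on the mat." scores well above 100 (very easy), while dense clinical prose full of multisyllabic terms scores far lower, consistent with the 'difficult' rating reported for ChatGPT's answers.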
