Abstract
Background: ChatGPT is becoming increasingly popular as an information source for the public. The adequacy of ChatGPT-generated patient counseling material has not yet been extensively assessed.

Methods: ChatGPT was presented with perioperative counseling and complication questions regarding five different procedures, and the accuracy of its responses was assessed. ChatGPT was then asked to provide an explanation of each procedure, and the quality of these responses was compared to online educational material.

Results: ChatGPT responses were comprehensive when discussing counseling points commonly addressed by a provider prior to a procedure. Responses to questions on surgical complications were less accurate and comprehensive. In comparison to online educational material, ChatGPT scored at or above the median SAM and PEMAT scores for all procedures.

Conclusions: ChatGPT performed well when addressing basic counseling points during the perioperative period, although it did not perform as well when addressing surgical complications. ChatGPT response quality was comparable to currently available online educational material.