Abstract

Appropriate patient education and preparation prior to surgery represent a fundamental step in managing expectations, avoiding unnecessary encounters, and ultimately achieving optimal outcomes. The objective of this study was therefore to evaluate ChatGPT's potential as a viable source of patient education by comparing its responses, and the references it provided, to frequently asked questions on body contouring with those provided by Google. A Google search was conducted on July 15th, 2023, using the search term "body contouring surgery". The first 15 questions under the "People also ask" section, together with the answers provided by Google, were recorded. The same 15 questions were then posed to ChatGPT-3.5. Four plastic surgeons rated the answers from 1 to 5 according to the Global Quality Scale. The mean score for responses given by Google was 2.55±1.29, indicating poor quality: some information was present, but of very limited use to patients. The mean score for responses produced by ChatGPT was 4.38±0.67, suggesting that the content was of good quality, useful to patients, and covered the most important topics. The difference was statistically significant (p=0.001). The failure to provide references represents one of ChatGPT's most evident weaknesses. However, ChatGPT did not appear to spread misinformation, and the content of its responses was deemed of good quality and useful to patients. The integration of AI technology as a source of patient education has the potential to optimize responses to patient queries on body contouring.
