Abstract

Chat Generative Pre-trained Transformer ('ChatGPT,' OpenAI, San Francisco, USA) is a free artificial intelligence (AI)-based natural language processing tool that generates complex responses to user inputs. This study aimed to determine whether ChatGPT can generate high-quality responses to patient-submitted questions in the patient portal. Patient-submitted questions and the corresponding responses from their dermatology physician were extracted from the electronic medical record for analysis. The questions were input into ChatGPT (version 3.5), and the outputs were extracted for analysis, with manual removal of verbiage pertaining to ChatGPT's inability to provide medical advice. Ten blinded reviewers (n=7 physicians, n=3 non-physicians) rated the physician- and ChatGPT-generated responses and selected their preferred response in terms of 'overall quality,' 'readability,' 'accuracy,' 'thoroughness,' and 'level of empathy.' Thirty-one messages and responses were analyzed. Both physician and non-physician reviewers strongly preferred the physician-generated responses, which also received significantly higher ratings for 'readability' and 'level of empathy.' These results suggest that physician-generated responses to patients' portal messages are still preferred over those of ChatGPT, but generative AI tools may nonetheless be helpful in producing first drafts of responses and educational resources for patients.
