Abstract
Context
Literature suggests patients with thyroid cancer have unmet informational needs in many aspects of care. Patients often turn to online resources for health-related information, and generative artificial intelligence programs such as ChatGPT are an emerging and attractive resource for patients.

Objective
To assess the quality of ChatGPT's responses to thyroid cancer-related questions.

Methods
Four endocrinologists and four endocrine surgeons, all with expertise in thyroid cancer, evaluated the responses to 20 thyroid cancer-related questions. Responses were scored on a 7-point Likert scale for accuracy, completeness, and overall satisfaction. Comments from the evaluators were aggregated and a qualitative analysis was performed.

Results
Overall, evaluators "agreed" or "strongly agreed" that ChatGPT's responses were accurate, complete, and satisfactory for only 57%, 56%, and 52% of ratings, respectively. One hundred ninety-eight free-text comments were included in the qualitative analysis; the majority were critical. Several themes emerged, including overemphasis of diet and iodine intake and their role in thyroid cancer, and incomplete or inaccurate information on the risks of both thyroid surgery and radioactive iodine therapy.

Conclusion
Our study suggests that ChatGPT is not accurate or reliable enough at this time for unsupervised use as a patient information tool for thyroid cancer.