Abstract
Background: AI chatbots such as ChatGPT are increasingly being used across many areas of society, including healthcare. Despite this growing adoption, their role in oncology remains underdeveloped. The objective of this study was to evaluate ChatGPT's ability to respond accurately to patient inquiries regarding colon cancer by comparing its responses against assessments from expert clinical oncologists.

Methods: Ten comprehensive questions were compiled by reviewing commonly asked questions from reputable sources, including the American Society of Colon & Rectal Surgeons, Mount Sinai, the National Cancer Institute, Mayo Clinic, and the American Cancer Society. The questions were divided into two categories based on their content: General Oncology Characteristics (covering symptoms, screening, and prevention) and Diagnosis & Treatment. These questions were then entered into ChatGPT with prompts designed to simulate patient inquiries. The AI-generated responses were evaluated by oncology experts on a 5-point Likert scale for accuracy and relevance, with scores reflecting the experts' level of agreement with the answers.

Results: On the 5-point Likert scale, with 1 representing "strongly disagree" and 5 representing "strongly agree," the mean score was 4.72. An ANOVA showed no statistically significant difference in mean scores across raters (p = 0.221). However, the difference in ratings between the two question categories was statistically significant (p = 0.034).

Conclusions: This study demonstrates that ChatGPT can provide accurate and relevant responses to patient inquiries about colon cancer, as assessed by medical oncology experts. With an average rating of 4.72 on a 5-point Likert scale, ChatGPT's responses closely align with expert opinion, particularly for the General Oncology Characteristics category, which included symptoms, prevention, and screening. Responses related to diagnosis and treatment, however, received significantly lower ratings, indicating that the experts agreed less with this component of the AI's responses. These findings highlight the potential of AI chatbots to supplement patient education in oncology, though further research is needed to explore their limitations and expand their clinical utility.
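For readers unfamiliar with the statistics described above, the following is a minimal sketch of how Likert ratings of this kind could be analyzed in Python. The rating values, the rater/question layout, and the use of an independent-samples t-test for the between-category comparison are assumptions made purely for illustration; the abstract does not report the raw data or specify which test produced the between-category p-value.

```python
# Hypothetical sketch of the rating analysis; the numbers below are invented
# for illustration and are NOT the study data.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert ratings: rows = raters, columns = the 10 questions.
ratings = np.array([
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],   # rater 1
    [5, 4, 5, 5, 4, 5, 5, 4, 5, 5],   # rater 2
    [4, 5, 5, 5, 5, 5, 4, 5, 5, 4],   # rater 3
])

# Overall mean score across all raters and questions.
print("mean score:", ratings.mean())

# One-way ANOVA testing whether mean ratings differ across raters.
f_raters, p_raters = stats.f_oneway(*ratings)
print(f"across raters: F = {f_raters:.3f}, p = {p_raters:.3f}")

# Between-category comparison: here the first five questions are assumed to be
# General Oncology Characteristics and the last five Diagnosis & Treatment.
general = ratings[:, :5].ravel()
diagnosis_treatment = ratings[:, 5:].ravel()
t_cat, p_cat = stats.ttest_ind(general, diagnosis_treatment)
print(f"between categories: t = {t_cat:.3f}, p = {p_cat:.3f}")
```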