Abstract

Background: Cancer management requires a multidisciplinary approach and often involves consultation with subspecialists. With the advent of artificial intelligence (AI) tools such as ChatGPT, it is hypothesized that these tools may help expedite the consultation process. This study aimed to assess the efficacy of ChatGPT in providing guideline-based subspecialty recommendations for managing patients (pts) with metastatic genitourinary (GU) cancer.

Methods: In this single-institution, IRB-approved, retrospective, proof-of-concept study, pts with metastatic GU cancer seen over the past 3 years were screened. Those with at least one referral to a subspecialty clinic were randomly selected. ChatGPT 3.5 was given the most recent clinic note that triggered the subspecialty consultation and was asked to provide an assessment and plan. Two physicians independently assessed the accuracy of the diagnoses made by ChatGPT and by the subspecialty physicians. The primary outcome was the consistency of ChatGPT recommendations with those of subspecialty physicians. Secondary outcomes included the potential time saved by using ChatGPT and a comparison of medical decision-making (MDM) complexity levels between ChatGPT and subspecialty physicians.

Results: A total of 39 pts were included. Their primary diagnoses included prostate cancer (51.3%), bladder cancer (23.1%), and kidney cancer (15.4%). The referred subspecialty clinics included cardiology (33.3%), hematology (17.9%), hepatology (2.6%), hospice (10.3%), neurology (12.8%), pulmonary (15.4%), and rheumatology (7.7%). The average waiting time for pts to be seen in subspecialty clinics was 44.9 days (SD = 42.4). Of the 39 patient charts reviewed by ChatGPT, 30 (76.9%) yielded the same diagnosis as the consulting subspecialists. The average number of diagnoses made by ChatGPT was 8.2, compared with 3.4 made by subspecialty physicians (p < 0.0001). The accuracy of diagnoses made by ChatGPT was the same as, higher than, and lower than that of human physicians in 10 (33.3%), 3 (10%), and 17 (56.7%) cases, respectively. Treatment plans were consistent between ChatGPT and physicians in 18 cases (46.2%). ChatGPT recommended additional workup in 32 cases (85.1%). The average number of words in consultation notes written by ChatGPT was 362.7 (SD = 72.9), significantly greater than that of subspecialty physicians (mean = 224.7; p < 0.0001).

Conclusions: These hypothesis-generating data suggest the potential utility of ChatGPT to assist medical oncologists in managing increasingly complex pts with metastatic cancer. Further studies are needed to validate our findings.
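The abstract does not specify how ChatGPT 3.5 was prompted beyond supplying the triggering clinic note and requesting an assessment and plan. As an illustration only, the sketch below shows how a comparable workflow could be scripted against the OpenAI API (model name gpt-3.5-turbo) rather than the ChatGPT web interface the authors may have used; the prompt wording, de-identification step, file name, and function names are assumptions for illustration and are not taken from the study.

```python
# Illustrative sketch only: feed a de-identified clinic note to a GPT-3.5-class
# model and request an assessment and plan, mirroring the study's general workflow.
# Model name, prompt wording, and helper names are assumptions, not the authors' protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a subspecialty consultant. Given an oncology clinic note, "
    "provide an assessment (differential diagnoses) and a management plan."
)

def consult(note_text: str) -> str:
    """Send one de-identified clinic note and return the model's assessment and plan."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # favor reproducible output for side-by-side comparison
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("deidentified_note.txt") as f:  # hypothetical input file
        print(consult(f.read()))
```

In the study itself, the model outputs were graded manually by two physicians against the subspecialists' notes; the abstract does not describe any automated scoring tooling.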
