Abstract

e13660

Background: Large language models (LLMs) are programs that can respond to queries without prior exposure to the input. They are generative statistical models trained on hundreds of terabytes of textual data, so their output is a statistical assessment of which answer the corpus of human-generated text most likely indicates is correct. ChatGPT is an example of an LLM that can interpret questions and synthesize responses based on probability. However, ChatGPT's potential to accurately assist physicians with diagnosis remains to be determined. The goal of this study is to evaluate AI's potential to propose feasible and accurate treatment plans specifically for patients with cancer.

Methods: Fifty patients at a medical oncology practice were recruited for this study. For each patient, the physician's shorthand notes were recorded and organized. ChatGPT was then prompted to propose a treatment plan from these notes, following a predetermined workflow for each patient. The responses were reviewed by a medical oncologist for accuracy and feasibility via a Qualtrics survey.

Results: ChatGPT proposed a treatment plan in agreement with the physician's for 18 of the 50 patients, an agreement rate of 36%. For the remaining 32 patients, the physician disagreed with ChatGPT's proposed plan on the basis of lack of personalization, prohibitive cost, incorrect recommendation, or better alternative options. Thirteen cases fell into the 'Lack of Personalization in Treatment' category, twelve into 'Incorrect Recommendation', four into 'Prohibitive Cost', and three into 'Better Alternative Options'. The largest share of disagreements thus concerned personalization, suggesting ChatGPT's difficulty in generating individualized treatment plans.

Conclusions: In the current study, ChatGPT was expected to create a treatment plan from only the clinician's abridged shorthand notes, without any familiarity with the physician's personal treatment preferences. This knowledge gap may account for the agreement rate falling below 50%. Notably, when prompted further outside the set workflow, ChatGPT was able to generate an adjusted and corrected version of the treatment plan. This highlights the invaluable role of clinical experience in synthesizing an accurate treatment plan. Despite the low agreement rate obtained in this study, ChatGPT has shown promise in developing feasible and accurate treatment plans in the oncology clinical setting under the oversight of the attending physician.
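The abstract does not specify how ChatGPT was queried (web interface vs. API), which model version was used, or the exact prompt wording. As a minimal sketch of the kind of workflow described in Methods, the Python snippet below shows how de-identified shorthand notes might be submitted programmatically; the model name, prompt text, the propose_treatment_plan helper, and the example note are all illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: submitting de-identified clinician shorthand notes
# to an LLM and asking for a proposed treatment plan. Model name, prompt
# wording, and note format are assumptions, not the study's actual workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

WORKFLOW_PROMPT = (
    "You are assisting a medical oncologist. Based on the clinician's "
    "shorthand notes below, propose a treatment plan for this patient."
)

def propose_treatment_plan(shorthand_notes: str) -> str:
    """Return the model's proposed treatment plan for one patient's notes."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: the abstract does not name a model version
        messages=[
            {"role": "system", "content": WORKFLOW_PROMPT},
            {"role": "user", "content": shorthand_notes},
        ],
    )
    return response.choices[0].message.content

# Example with a fabricated, de-identified note (illustrative only)
notes = "68F, stage III NSCLC, s/p lobectomy, PD-L1 40%, ECOG 1"
print(propose_treatment_plan(notes))
```

In the study itself, each proposed plan generated this way was then reviewed by the medical oncologist for accuracy and feasibility.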
