Abstract

The aim of this study was to assess the accuracy of ChatGPT on oncology exam questions in a one-shot learning setting. Questions from consecutive national radiation oncology in-service multiple-choice examinations were collected and entered into ChatGPT 4o and ChatGPT 3.5. Each model's answers were then compared against the answer keys to score every question as correct or incorrect and to determine whether the newer ChatGPT version improved performance. A total of 600 consecutive questions were entered into ChatGPT. ChatGPT 4o answered 72.2% of questions correctly, whereas ChatGPT 3.5 answered 53.8% correctly. There was a significant difference in performance by question category (p < 0.01). ChatGPT performed more poorly on questions involving knowledge of landmark studies and treatment recommendations/planning. ChatGPT is a promising technology, and the latest version shows marked improvement. While it still has limitations, with further evolution it may become a reliable resource for medical training and decision-making in the oncology space.
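As a rough illustration of the kind of analysis reported here, the following sketch shows how per-category correctness could be tested with a chi-square test of independence. This is not the authors' code; the category names and tallies are hypothetical placeholders, and only the headline accuracy figures (72.2% and 53.8%) come from the abstract.

```python
# Minimal sketch of the reported comparison, assuming a per-category
# tally of correct/incorrect answers. All counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows = question categories (names hypothetical),
# columns = (correct, incorrect) counts for one model.
counts_by_category = np.array([
    [52, 8],   # e.g., radiation biology
    [40, 20],  # e.g., treatment planning
    [35, 25],  # e.g., landmark studies
])

# Chi-square test of independence between category and correctness,
# analogous to the paper's "difference in performance by question category".
chi2, p_value, dof, _ = chi2_contingency(counts_by_category)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

# Overall accuracies reported in the abstract (out of 600 questions).
accuracy_4o = 0.722   # ChatGPT 4o: 72.2% correct
accuracy_35 = 0.538   # ChatGPT 3.5: 53.8% correct
print(f"ChatGPT 4o: {accuracy_4o:.1%} vs ChatGPT 3.5: {accuracy_35:.1%}")
```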
