Gynecologic cancer requires personalized care to improve outcomes. Large language models (LLMs) hold the potential to provide intelligent question-answering, offering reliable information about medical queries in clear, plain English that can be understood by both healthcare providers and patients. We aimed to evaluate two freely available LLMs (ChatGPT and Google's Bard) in answering questions regarding the management of gynecologic cancer. The LLMs' performance was evaluated by developing a set of questions that addressed common gynecologic oncologic findings from a patient's perspective, as well as more complex questions designed to elicit recommendations from a clinician's perspective. Each question was presented to the LLM interface, and the responses generated by the artificial intelligence (AI) model were recorded. The responses were assessed for adherence to the National Comprehensive Cancer Network and European Society of Gynecological Oncology guidelines. This evaluation aimed to determine the accuracy and appropriateness of the information provided by LLMs. We showed that the models provided largely appropriate responses to questions regarding common cervical cancer screening tests and BRCA-related topics. Less useful answers were given for complex and controversial gynecologic oncology cases, as judged against the same guidelines. ChatGPT and Bard lacked knowledge of regional guideline variations; however, they provided practical and multifaceted advice to patients and caregivers regarding the next steps of management and follow-up. We conclude that LLMs may have a role as an adjunct informational tool to improve outcomes.