Abstract

Many myths about Alzheimer's disease (AD) circulate on the internet, with varying degrees of accuracy and misinformation. Large language models such as ChatGPT may be valuable tools for assessing the veracity of these myths; however, they can also introduce misinformation. This study assessed ChatGPT's ability to identify AD myths and address them with reliable information. We conducted a cross-sectional study in which attending geriatric medicine clinicians evaluated ChatGPT (GPT 4.0) responses to 16 selected AD myths. We prompted ChatGPT to express its opinion on each myth and administered a REDCap survey to determine the degree to which clinicians agreed with the accuracy of each of ChatGPT's explanations; we also collected their explanations of any disagreements with ChatGPT's responses. Clinicians' agreement with each aspect of the evaluation was quantified on a 5-category Likert-type scale scored from −2 to 2. The clinicians (n = 10) were generally satisfied with ChatGPT's explanations of the 16 myths (mean score 1.1, SD 0.3). Most clinicians selected “Agree” or “Strongly Agree” for each statement; some statements received a small number of “Disagree” responses, and none received “Strongly Disagree.” Most surveyed health care professionals acknowledged the potential value of ChatGPT in mitigating AD misinformation, but they highlighted the need for more refined and detailed explanations of the disease's mechanisms and treatments.
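
To make the scoring concrete, the following is a minimal Python sketch of how 5-category Likert responses could be mapped to the −2 to 2 scale and summarized as a mean and SD per statement. The response data, mapping, and function name are hypothetical illustrations, not taken from the study.

```python
import statistics

# Assumed mapping of the 5 Likert categories onto the -2..2 scale
# described in the abstract (an assumption, not the study's codebook).
LIKERT_SCORES = {
    "Strongly Disagree": -2,
    "Disagree": -1,
    "Neutral": 0,
    "Agree": 1,
    "Strongly Agree": 2,
}

def summarize_statement(responses: list[str]) -> tuple[float, float]:
    """Return the (mean, sample SD) agreement score for one statement."""
    scores = [LIKERT_SCORES[r] for r in responses]
    return statistics.mean(scores), statistics.stdev(scores)

# Hypothetical ratings from ten clinicians for one myth's explanation.
ratings = ["Agree"] * 6 + ["Strongly Agree"] * 3 + ["Disagree"]
mean, sd = summarize_statement(ratings)
print(f"mean = {mean:.1f}, SD = {sd:.1f}")  # mean = 1.1, SD = 0.9
```

In practice, the per-statement means would then be averaged across the 16 myths to yield an overall agreement score of the kind reported above.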
