Abstract

The rise of artificial intelligence (AI) models like ChatGPT offers potential for varied applications, including patient education in healthcare. Given gaps in osteoporosis and bone health knowledge and in adherence to prevention and treatment, this study aims to evaluate the accuracy of ChatGPT in delivering evidence-based information related to osteoporosis. Twenty of the most frequently asked questions (FAQs) related to osteoporosis were subcategorized into diagnosis, diagnostic method, risk factors, and treatment and prevention. These FAQs were sourced online and entered into ChatGPT-3.5. Three orthopedic surgeons and one advanced practice provider who routinely treat patients with fragility fractures independently reviewed the ChatGPT-generated answers, grading them on a scale from 0 (harmful) to 4 (excellent). Mean response accuracy scores were calculated, and a one-way analysis of variance (ANOVA) was used to compare mean scores across the four categories. ChatGPT displayed an overall mean accuracy score of 91%. Its responses were graded as "accurate requiring minimal clarification" or "excellent," with mean response scores ranging from 3.25 to 4. No answers were deemed inaccurate or harmful, and no significant difference in mean response scores was observed across the defined categories. ChatGPT-3.5 provided high-quality educational content: it showed a high degree of accuracy in addressing osteoporosis-related questions, with well-structured, comprehensive answers that aligned closely with expert opinion and current literature. However, while AI models can improve the accessibility of patient information, they should be used as an adjunct to, rather than a substitute for, human expertise and clinical judgment.
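The analysis described above (per-category mean grades on the 0 to 4 scale, plus a one-way ANOVA across the four FAQ categories) can be illustrated with a minimal sketch. The grades below are hypothetical placeholders, not the study data, and the sketch assumes NumPy and SciPy are available; it is not the authors' actual analysis code.

```python
# Minimal sketch of the scoring analysis: mean accuracy per FAQ category
# (as a percentage of the maximum grade of 4) and a one-way ANOVA across
# the four categories. All grade values are illustrative placeholders.
import numpy as np
from scipy import stats

# Hypothetical reviewer grades (0 = harmful, 4 = excellent), pooled per category.
grades = {
    "diagnosis":                [4, 4, 3, 4, 4, 3, 4, 4],
    "diagnostic method":        [4, 3, 4, 4, 4, 4, 3, 4],
    "risk factors":             [4, 4, 4, 3, 4, 4, 4, 4],
    "treatment and prevention": [3, 4, 4, 4, 3, 4, 4, 4],
}

for category, scores in grades.items():
    # Express the mean grade as a percentage of the maximum possible score (4).
    pct = 100 * np.mean(scores) / 4
    print(f"{category}: mean accuracy {pct:.0f}%")

# One-way ANOVA: does the mean grade differ across the four categories?
f_stat, p_value = stats.f_oneway(*grades.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```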
