The internet and standard search engines are commonly used resources for patients seeking medical information online. With the advancement and increasing use of artificial intelligence (AI) in health information, online AI chatbots such as ChatGPT may surpass traditional web search engines as the next go-to online resource for medical information. This study aims to assess the ability of ChatGPT to answer frequently asked questions (FAQs) regarding pediatric supracondylar humerus (SCH) fractures. Seven FAQs regarding SCH fractures were presented to ChatGPT. Initial responses were recorded and rated as one of the following: "excellent requiring no clarification (0 items need clarification)," "satisfactory requiring minimal clarification (1 to 2 items need clarification)," "satisfactory requiring moderate clarification (3 to 4 items need clarification)," or "unsatisfactory requiring substantial clarification (>4 items need clarification or response contains false information)." Four responses were rated satisfactory, requiring either minimal (2 responses) or moderate (2 responses) clarification, while the remaining 3 of the 7 FAQs yielded unsatisfactory responses. No response was rated excellent (requiring no further clarification). ChatGPT provided some satisfactory responses to FAQs regarding pediatric SCH fractures, but its answers required substantial clarification regarding treatment algorithms, casting and return-to-sport timelines, and the utility of physical therapy. ChatGPT is therefore an unreliable resource for information on treating SCH fractures, and parents of children who sustain SCH fractures should continue to communicate with their doctors for the most accurate medical information. Level of Evidence: Level V, expert opinion on ChatGPT responses.