Patients are increasingly turning to the internet, and more recently to artificial intelligence chatbots (e.g., ChatGPT), for answers to common medical questions. In orthopedic hand surgery, recent literature has focused on ChatGPT's ability to answer patients' frequently asked questions (FAQs) on subjects such as carpal tunnel syndrome and distal radius fractures. The present study seeks to determine how accurately ChatGPT can answer patient FAQs about simple fracture patterns such as fifth metacarpal neck fractures. Internet queries were used to identify the ten most frequently asked patient questions about boxer's fractures based on information from five trusted healthcare institutions. These ten questions were posed to ChatGPT 4.0, and the chatbot's responses were recorded. Two fellowship-trained orthopedic hand surgeons and one orthopedic hand surgery fellow then graded ChatGPT's responses on a letter-grade scale (A-F); additional commentary was provided for each response. Descriptive statistics were used to report question-level, grader-level, and overall ChatGPT response grades. ChatGPT achieved a cumulative grade of B, indicating that the chatbot can provide adequate responses, with only minor need for clarification, when answering FAQs about boxer's fractures. The individual graders assigned comparable overall grades of B, B, and B+, respectively. ChatGPT deferred to a medical professional in 7 of 10 responses. General questions received a grade of A-, whereas management questions received a grade of C+. Overall, with a grade of B, ChatGPT 4.0 provides adequate-to-complete responses to patient FAQs surrounding boxer's fractures.