Abstract

The optimal management of distal radius fractures remains a challenge for orthopaedic surgeons. The emergence of artificial intelligence (AI) and large language models (LLMs), especially ChatGPT, offers significant potential for improving healthcare and research. This study assesses the accuracy and consistency of ChatGPT's knowledge of distal radius fracture management, focusing on its ability to provide information to patients and to support the decision-making of orthopaedic clinicians. We presented ChatGPT with seven questions on distal radius fracture management over two sessions, yielding 14 responses. The questions covered a range of topics, including patient inquiries and orthopaedic clinical decision-making. We requested references for each response, and two orthopaedic registrars and two senior orthopaedic surgeons evaluated the responses for accuracy and consistency. All 14 responses contained a mix of correct and incorrect information. Of the 47 cited references, 13% were accurate, 28% appeared to be fabricated, 57% were incorrect, and 2% were correct but deemed inappropriate. Responses were consistent across the two sessions in 71% of cases. ChatGPT demonstrates significant limitations in accuracy and consistency when providing information on distal radius fractures; in its current form, it offers limited utility for patient education and clinical decision-making.
