Abstract

OBJECTIVES: This study assessed the feasibility, inter-rater reliability, and accuracy of using OpenAI's ChatGPT-4 and Google's Gemini Ultra large language models (LLMs) for Emergency Medical Services (EMS) quality assurance. Applying LLMs to EMS quality assurance could substantially reduce the workload of medical directors and quality assurance staff by automating aspects of patient care report processing and review, offering more efficient and accurate identification of areas requiring improvement and thereby potentially enhancing patient care outcomes.

METHODS: Two expert human reviewers, ChatGPT-4, and Gemini Ultra assessed and rated 150 consecutively sampled, anonymized prehospital records from 2 large urban EMS agencies for adherence to the 2020 National Association of State EMS Officials metrics for cardiac care. We evaluated scoring accuracy, inter-rater reliability, and review efficiency. Inter-rater reliability for the dichotomous outcome of each EMS metric was measured using the kappa statistic.

RESULTS: Human reviewers showed high inter-rater reliability, with 91.2% agreement and a kappa coefficient of 0.782 (0.654-0.910). ChatGPT-4 achieved substantial agreement with human reviewers on EKG documentation and aspirin administration (76.2% agreement; kappa coefficient 0.401 [0.334-0.468]), but performance varied across other metrics. Gemini Ultra's evaluation was discontinued due to poor performance. No significant differences were observed in median review times: 1:28 minutes (IQR 1:12-1:51) per human chart review, 1:24 minutes (IQR 1:09-1:53) per ChatGPT-4 chart review (p = 0.46), and 1:50 minutes (IQR 1:10-3:34) per Gemini Ultra review (p = 0.06).

CONCLUSIONS: Large language models demonstrate potential to support quality assurance by effectively and objectively extracting data elements. However, their accuracy in interpreting non-standardized and time-sensitive details remains inferior to that of human evaluators. Our findings suggest that current LLMs may best offer supplemental support to human review processes, but their value remains limited. Enhancements in LLM training and integration are recommended for improved and more reliable performance in quality assurance processes.
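For readers unfamiliar with the agreement measure used above, the following minimal sketch (not drawn from the study's data; all ratings are hypothetical) shows how percent agreement and Cohen's kappa can be computed for two raters' dichotomous pass/fail chart-review judgments, here using scikit-learn:

```python
# Illustrative only: percent agreement and Cohen's kappa for two raters'
# dichotomous (pass = 1 / fail = 0) ratings of the same set of charts.
from sklearn.metrics import cohen_kappa_score

human = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # hypothetical human reviewer ratings
llm   = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]  # hypothetical LLM ratings

# Raw percent agreement: fraction of charts where the two raters concur.
agreement = sum(h == l for h, l in zip(human, llm)) / len(human)

# Cohen's kappa: agreement corrected for the level expected by chance.
kappa = cohen_kappa_score(human, llm)

print(f"Percent agreement: {agreement:.1%}")
print(f"Cohen's kappa: {kappa:.3f}")
```

Kappa is preferred over raw agreement here because chance concordance on a dichotomous outcome can be high even for unrelated raters.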

