Abstract

Approximately 20% of emergency department (ED) visits involve cardiovascular symptoms. While ECGs are crucial for diagnosing serious conditions, interpretation accuracy varies among emergency physicians. Artificial intelligence (AI), such as ChatGPT, could assist in ECG interpretation by enhancing diagnostic precision. This single-center, retrospective observational study, conducted at Merano Hospital's ED, assessed ChatGPT's agreement with cardiologists in interpreting ECGs. The primary outcome was the level of agreement between ChatGPT and cardiologists. Secondary outcomes included ChatGPT's ability to identify patients at risk for Major Adverse Cardiac Events (MACE). Among the 128 patients enrolled, ChatGPT showed good agreement with cardiologists on most ECG segments, with the exception of the T wave (kappa = 0.048) and the ST segment (kappa = 0.267). Significant discrepancies arose in the assessment of critical cases, as ChatGPT classified more patients as at risk for MACE than physicians did. ChatGPT demonstrates moderate accuracy in ECG interpretation, yet its current limitations, especially in assessing critical cases, restrict its clinical utility in ED settings. Future research and technological advancements could enhance AI's reliability, potentially positioning it as a valuable support tool for emergency physicians.
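
The agreement figures quoted above are Cohen's kappa coefficients, which correct raw percent agreement between two raters for agreement expected by chance. The sketch below illustrates how such a segment-level comparison could be computed; it assumes scikit-learn's cohen_kappa_score and uses hypothetical per-patient labels, not the study data.

```python
# Minimal sketch: chance-corrected agreement between two readers of one ECG
# segment (e.g., the T wave). The label arrays are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical readings, one entry per enrolled patient,
# coded as "normal" / "abnormal" for the segment being compared.
cardiologist_reads = ["normal", "abnormal", "normal", "normal", "abnormal"]
chatgpt_reads      = ["normal", "normal",   "normal", "abnormal", "abnormal"]

# Cohen's kappa ranges from -1 to 1; values near 0 indicate agreement
# no better than chance, values near 1 indicate near-perfect agreement.
kappa = cohen_kappa_score(cardiologist_reads, chatgpt_reads)
print(f"Cohen's kappa: {kappa:.3f}")
```

The same calculation would be repeated per ECG segment (P wave, QRS complex, ST segment, T wave), which is how per-segment values such as kappa = 0.048 for the T wave could be obtained.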
