Abstract

Aim: To evaluate the performance of ChatGPT-4.0 in providing preliminary diagnoses and treatment plans for cardiology clinical cases, as judged by expert cardiologists. Methods: Twenty cardiology clinical cases developed by experienced cardiologists were divided into two groups according to preparation method. Each case was analyzed by ChatGPT-4.0, and the resulting analyses were sent to cardiologists for review. Eighteen expert cardiologists rated the quality of the ChatGPT-4.0 responses using Likert and Global Quality scales. Results: Physicians rated case difficulty at a median of 2.00 and their agreement with ChatGPT-4.0's differential diagnoses at a median of 5.00. Management plans received a median score of 4, indicating good quality. ChatGPT-4.0 performed similarly regardless of case difficulty in both differential diagnosis (p = 0.256) and treatment planning (p = 0.951). Conclusion: ChatGPT-4.0 delivers accurate management plans and demonstrates potential as a valuable clinical decision support tool in cardiology.
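The abstract does not state which statistical test produced the p-values comparing performance across case difficulty; for ordinal Likert ratings, a nonparametric test such as the Mann-Whitney U is a common choice. The sketch below illustrates that kind of comparison under this assumption, using made-up scores rather than the study's data:

```python
# Hypothetical sketch: comparing Likert ratings of ChatGPT-4.0 responses
# between easier and harder case groups. The abstract does not name the
# test used; Mann-Whitney U is assumed here as a standard choice for
# ordinal Likert data.
from scipy.stats import mannwhitneyu

# Illustrative 1-5 Likert ratings only, not the study's data
easy_case_scores = [5, 5, 4, 5, 4, 5, 5, 4, 5, 4]
hard_case_scores = [4, 5, 5, 4, 5, 4, 4, 5, 5, 4]

stat, p_value = mannwhitneyu(
    easy_case_scores, hard_case_scores, alternative="two-sided"
)
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```

A non-significant p-value in this kind of test, as reported in the abstract, would indicate no detectable difference in rated quality between the two difficulty groups.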
