Burn injuries often require immediate assistance and specialized care for optimal management and outcomes. Accessible artificial intelligence technology has only recently been applied to healthcare decision making and patient education, and its role in clinical recommendations remains under scrutiny. This study evaluates the appropriateness of ChatGPT's responses to commonly asked questions regarding acute burn care against the American Burn Association guidelines. A fellowship-trained burn surgeon formulated twelve commonly asked questions addressing the American Burn Association's recommendations on burn injuries, management, and patient referral. These questions were posed to ChatGPT, and each response was compared with the aforementioned guidelines, the gold standard for accurate and evidence-based burn care recommendations. Three burn surgeons independently rated the appropriateness and comprehensiveness of each ChatGPT response against the guidelines using a modified Global Quality Score scale. ChatGPT-generated responses received an average score of 4.56 ± 0.65, indicating responses of exceptional quality that covered the most important topics and showed high concordance with the guidelines. This initial comparison of ChatGPT-generated responses with the American Burn Association guidelines demonstrates that ChatGPT can accurately and comprehensively describe appropriate treatment and management plans for acute burn injuries. We foresee that ChatGPT may serve as a complementary tool in medical decision making and patient education, with a potentially profound impact on clinical practice, research, and education.