Potential drug-drug interactions (pDDIs) pose substantial risks in clinical practice, leading to increased morbidity, mortality, and healthcare costs. Tools such as the Micromedex drug-drug interaction checker are commonly used to screen for pDDIs, and emerging AI models such as ChatGPT offer the potential for supplementary pDDI prediction. However, the accuracy and reliability of these AI tools in a clinical context remain largely untested. This study evaluates pDDIs in discharge prescriptions for medical ward patients and assesses ChatGPT-4.0's effectiveness in predicting these interactions compared with the Micromedex drug-drug interaction checker. A cross-sectional study was conducted over three months with 301 discharged patients. pDDIs were identified using the Micromedex drug-drug interaction checker, detailing each interaction's occurrence, severity, onset, and documentation. ChatGPT-4.0 predictions were then analyzed against the Micromedex data. Binary logistic regression analysis was applied to assess the influence of predictor variables on the occurrence of pDDIs. A total of 1551 drugs were prescribed to the 301 patients, averaging 5.15 per patient. pDDIs were detected in 60.13% of patients, averaging 3.17 pDDIs per patient. ChatGPT-4.0 accurately identified pDDIs (100% for occurrence) but had limited accuracy for severity (37.3%) and moderate accuracy for onset (65.2%). The most frequent major interaction was between Cefuroxime Axetil and Pantoprazole Sodium. Polypharmacy significantly increased the risk of pDDIs (OR: 3.960, p<0.001). pDDIs are prevalent in internal medicine discharge prescriptions, with polypharmacy heightening the risk. While ChatGPT-4.0 accurately identifies pDDI occurrence, its limitations in predicting severity, onset, and documentation underscore the need for careful oversight by healthcare professionals.