Abstract

The combination of Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) spectral data and artificial intelligence (AI) has been introduced for rapid prediction of antibiotic susceptibility testing (AST) results for Staphylococcus aureus. Based on the AI model's predictive probability, cases with probabilities between the low and high cut-offs are defined as being in the “grey zone”. We aimed to investigate the underlying reasons for unconfident (grey-zone) or wrong AST predictions. In total, 479 S. aureus isolates were collected at a tertiary medical center and analyzed by MALDI-TOF, and both predicted and standard AST results were obtained. The predictions were categorized into a correct-prediction group, a wrong-prediction group, and a grey-zone group. We analyzed the association between the predictive results and the demographic data, spectral data, and strain types. For methicillin-resistant S. aureus (MRSA), a larger cefoxitin inhibition zone size was found in the wrong-prediction group. Multilocus sequence typing of the MRSA isolates in the grey-zone group revealed that uncommon strain types comprised 80%. Of the methicillin-susceptible S. aureus (MSSA) isolates in the grey-zone group, the majority (60%) were distributed across more than 10 different strain types. In AST prediction based on MALDI-TOF AI, uncommon strain types and high strain diversity contribute to suboptimal predictive performance.
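As a minimal sketch of the grey-zone rule described above, the Python snippet below categorizes a predicted probability against low and high cut-offs. The cut-off values (0.2 and 0.8) and the function name are illustrative assumptions, not the study's actual thresholds:

```python
def categorize_prediction(prob_resistant, true_label,
                          low_cutoff=0.2, high_cutoff=0.8):
    """Assign one AI-predicted AST case to one of three groups.

    prob_resistant : model's predicted probability that the isolate
                     is resistant (e.g., MRSA).
    true_label     : result of standard AST ("R" or "S").
    Cut-off values here are hypothetical, not the study's.
    """
    if low_cutoff < prob_resistant < high_cutoff:
        return "grey-zone"  # unconfident prediction
    predicted = "R" if prob_resistant >= high_cutoff else "S"
    return "correct" if predicted == true_label else "wrong"

# Example: three isolates with predicted probabilities and reference AST
for prob, label in [(0.95, "R"), (0.55, "R"), (0.10, "R")]:
    print(prob, label, "->", categorize_prediction(prob, label))
# 0.95 R -> correct; 0.55 R -> grey-zone; 0.10 R -> wrong
```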

Highlights

  • Artificial intelligence (AI) has been successfully applied in a variety of medical practices, offering faster diagnostic speed and accuracy comparable to expert judgement [1]

  • We aimed to investigate factors associated with unconfident (grey-zone) or wrong predictions

  • The results revealed that methicillin-susceptible S. aureus (MSSA) isolates from respiratory tract specimens tended to have correct antibiotic susceptibility testing (AST) predictions for oxacillin (Table 1)

Introduction

The confidence of a prediction never reaches 100% [2]. Predictive uncertainty can have multiple sources, such as missing information, bias, noise, and dataset shift [3]. In medical AI, especially for life-critical decision making, reporting the uncertainty of a prediction is required [3,4,5]. A key to the success of medical AI is calibrating human trust by providing a confidence score for the model on a case-by-case basis [5,6]. By providing these uncertainties to decision makers, the abilities of machines and humans can be combined and prediction performance enhanced [2,3,5].
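One common way to act on per-case confidence is a reject option: the model answers only when confident and defers grey-zone cases to a human reader. The sketch below illustrates this on synthetic data; the simulated probabilities, cohort size, and cut-offs are all assumptions of our own, not values from the study:

```python
import random

random.seed(0)

def simulate_case():
    """Simulate one isolate: a true AST label and a model probability
    that is imperfectly correlated with it (synthetic, illustrative)."""
    truth = random.choice("RS")
    center = 0.8 if truth == "R" else 0.2
    prob = min(1.0, max(0.0, random.gauss(center, 0.25)))
    return prob, truth

cohort = [simulate_case() for _ in range(1000)]
LOW, HIGH = 0.2, 0.8  # hypothetical confidence cut-offs

# Policy 1: always answer, ignoring uncertainty.
always = sum(("R" if p >= 0.5 else "S") == t for p, t in cohort)

# Policy 2: answer only outside the grey zone; defer the rest to a human.
answered = [(p, t) for p, t in cohort if not LOW < p < HIGH]
confident = sum(("R" if p >= HIGH else "S") == t for p, t in answered)

print(f"always answer: accuracy {always / len(cohort):.0%}")
print(f"with deferral: accuracy {confident / len(answered):.0%} "
      f"on {len(answered) / len(cohort):.0%} of cases "
      f"({len(cohort) - len(answered)} deferred to human review)")
```

On this synthetic cohort, deferring grey-zone cases raises accuracy on the answered subset well above the always-answer baseline, at the cost of routing roughly half the cases to human review; this is the machine-plus-human combination the references above describe.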
