Abstract

Interpretation of breath sounds by auscultation has high inter-observer variability, even when performed by trained healthcare professionals. This can be mitigated by using Artificial Intelligence (AI) acoustic analysis. We aimed to develop and validate a novel breath sound analysis system using AI-enabled algorithms to accurately interpret breath sounds in children. Subjects from respiratory clinics and wards were auscultated by two independent respiratory paediatricians blinded to their clinical diagnosis. A novel device consisting of a stethoscope head connected to a smartphone recorded the breath sounds. The audio files were categorised into single-label (normal, wheeze and crackles) or multi-label sounds. These recordings, together with commercially available breath sounds, were used to train an AI classifier using machine learning, and unique features were identified to distinguish the breath sounds. The single-label breath sound samples were used to validate the finalised Support Vector Machine (SVM) classifier. Breath sound samples (73 single-label, 20 multi-label) were collected from 93 children (mean age [SD] = 5.40 [4.07] years). Inter-rater concordance was observed in 81 (87.1%) samples. On the 73 single-label breath sounds, the classifier demonstrated 91% sensitivity and 95% specificity. The AI classifier developed could identify normal breath sounds, crackles and wheeze in children with high accuracy.
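The abstract describes an SVM classifier trained on features extracted from breath-sound recordings, but the published feature set and code are not given here. The sketch below is a minimal illustration of that kind of pipeline, assuming MFCC summary features (via librosa) and an RBF-kernel SVC from scikit-learn; the file names, labels, and parameters are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only; not the authors' code or feature set.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(path, sr=4000, n_mfcc=13):
    """Load one breath-sound recording and summarise it as a fixed-length MFCC vector."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Mean and standard deviation over time give a compact clip-level descriptor.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical recordings and labels (0 = normal, 1 = wheeze, 2 = crackles).
files = ["normal_01.wav", "wheeze_01.wav", "crackles_01.wav"]
labels = [0, 1, 2]

X = np.stack([extract_features(f) for f in files])
y = np.array(labels)

# RBF-kernel SVM on standardised features, as a stand-in for the paper's SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict(X))
```

In practice the fitted classifier would be evaluated on held-out single-label samples, reporting sensitivity and specificity per class as in the study.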
