Abstract

Chronic respiratory diseases affect millions of people and are leading causes of death in the US and worldwide. Pulmonary auscultation provides clinicians with critical respiratory health information through the study of Lung Sounds (LS) and the context of the breathing phase and chest location in which they are measured. Existing auscultation technologies, however, do not enable simultaneous measurement of this context, potentially limiting computerized LS analysis. In this work, LS and Impedance Pneumography (IP) measurements were obtained from 10 healthy volunteers as they performed normal and forced-expiratory (FE) breathing maneuvers, using our wearable IP and respiratory sounds (WIRS) system. Simultaneous auscultation was performed with the Eko CORE stethoscope (EKO). The breathing-phase context was extracted from the IP signals and used to compute phase-by-phase (inspiratory (I), expiratory (E), and their ratio (I:E)) and breath-by-breath acoustic features. Their individual and added value was then elucidated through machine learning analysis. We found that the phase-contextualized features effectively captured the underlying acoustic differences between deep and FE breaths, yielding a maximum F1 score of 84.1 ± 11.4%, with the phase-by-phase features as the strongest contributors to this performance. Further, the individual phase-contextualized models outperformed the traditional breath-by-breath models in all cases. The validity of the results was demonstrated for the LS obtained with WIRS, EKO, and their combination. These results suggest that incorporating breathing-phase context may enhance computerized LS analysis. Hence, multimodal sensing systems that enable this, such as WIRS, have the potential to advance the clinical utility of LS beyond traditional manual auscultation and improve patient care.
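To make the pipeline described above concrete, the sketch below shows one way breathing-phase context could be extracted from an IP trace and used to compute phase-by-phase (I, E, I:E) and breath-by-breath acoustic features. This is a minimal illustration, not the authors' implementation: the function names, the specific features (RMS energy and spectral centroid), and the peak-detection settings are all illustrative assumptions. It relies only on the fact, stated in the abstract, that the IP signal carries the breathing-phase context, since IP roughly tracks lung volume (rising during inspiration, falling during expiration).

```python
"""Minimal sketch (not the authors' code) of phase-contextualized
lung-sound feature extraction from a simultaneous IP recording.
All names, features, and thresholds are illustrative assumptions."""
import numpy as np
from scipy.signal import find_peaks


def segment_phases(ip, fs_ip):
    """Split an IP trace into per-breath (inspiratory, expiratory) spans.

    Troughs mark inspiration onsets; peaks mark the inspiration-to-
    expiration transition, since IP rises with lung volume.
    """
    min_dist = int(1.0 * fs_ip)  # assume successive breaths are > 1 s apart
    peaks, _ = find_peaks(ip, distance=min_dist)
    troughs, _ = find_peaks(-ip, distance=min_dist)
    phases = []
    for t0 in troughs:
        nxt_pk = peaks[peaks > t0]
        if len(nxt_pk) == 0:
            break
        pk = nxt_pk[0]
        nxt_tr = troughs[troughs > pk]
        if len(nxt_tr) == 0:
            break
        phases.append(((t0, pk), (pk, nxt_tr[0])))  # (insp, exp) sample spans
    return phases


def phase_features(audio, fs_a, span, fs_ip):
    """Example acoustic features (RMS energy, spectral centroid) for one phase."""
    a, b = (int(s * fs_a / fs_ip) for s in span)  # map IP samples to audio samples
    seg = audio[a:b]
    rms = np.sqrt(np.mean(seg ** 2))
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(len(seg), 1.0 / fs_a)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return np.array([rms, centroid])


def breath_features(audio, fs_a, phases, fs_ip):
    """Phase-by-phase (I, E, I:E) plus breath-by-breath features per breath."""
    rows = []
    for insp, exp in phases:
        fi = phase_features(audio, fs_a, insp, fs_ip)
        fe = phase_features(audio, fs_a, exp, fs_ip)
        ie = fi / (fe + 1e-12)  # I:E ratios of the acoustic features
        whole = phase_features(audio, fs_a, (insp[0], exp[1]), fs_ip)
        rows.append(np.concatenate([fi, fe, ie, whole]))
    return np.vstack(rows)


if __name__ == "__main__":
    # Synthetic demo: ~4 s sinusoidal "breaths" and noise as stand-in audio.
    fs_ip, fs_a, dur = 100, 4000, 12
    t = np.arange(0, dur, 1 / fs_ip)
    ip = np.sin(2 * np.pi * 0.25 * t)
    audio = np.random.randn(dur * fs_a) * 0.01
    X = breath_features(audio, fs_a, segment_phases(ip, fs_ip), fs_ip)
    print(X.shape)  # (n_breaths, n_features)
```

The per-breath feature rows produced this way could then feed any standard classifier, for example scikit-learn's RandomForestClassifier evaluated with sklearn.metrics.f1_score, mirroring the machine learning analysis and F1-based evaluation reported in the abstract.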
