Abstract

Lung sound event detection, which segments continuous lung sound recordings into discrete events and recognizes each event, is a primary diagnostic step for numerous respiratory diseases. This paper proposes a novel framework for this task, built on a multi-branch temporal convolutional network (TCN) architecture and a novel fusion strategy that combines the features produced by the branches. This fusion not only allows the network to retain the most salient information across different temporal granularities while discarding irrelevant information, but also enables the network to process recordings of arbitrary length. The proposed method is evaluated on multiple public and in-house benchmarks containing irregular, noisy recordings of respiratory auscultation, with the goal of identifying auscultation events including inhalation, crackles, and rhonchi. Moreover, we provide an end-to-end model interpretation pipeline. Our analysis of different feature fusion strategies shows that the proposed feature concatenation method better suppresses non-informative features, which drastically reduces classifier overhead and yields a robust, lightweight network. The proposed method offers a cost-effective and efficient alternative to exhaustive manual segmentation and segments recordings more accurately than existing methods, while the end-to-end model interpretability helps build the trust required for use in clinical settings.
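To make the architectural idea concrete, the following is a minimal sketch, not the authors' implementation, of a multi-branch TCN with concatenation fusion as described above. All specifics here are assumptions for illustration: the choice of PyTorch, the branch count and kernel sizes (one temporal granularity per branch), the hidden width, the non-causal "same" padding (published TCNs are often causal), the global average pooling used to accept arbitrary-length inputs, and the three event classes.

```python
# Illustrative sketch only: parallel dilated-convolution branches with
# different receptive fields, fused by feature concatenation, followed by
# a lightweight linear classifier. Layer sizes, kernel sizes, and class
# names are assumptions, not values from the paper.
import torch
import torch.nn as nn


class TCNBranch(nn.Module):
    """One temporal-convolution branch with a fixed kernel size and
    exponentially increasing dilation (one temporal granularity)."""

    def __init__(self, in_ch, hidden_ch, kernel_size, num_layers):
        super().__init__()
        layers = []
        for i in range(num_layers):
            dilation = 2 ** i
            layers += [
                nn.Conv1d(
                    in_ch if i == 0 else hidden_ch,
                    hidden_ch,
                    kernel_size,
                    # "Same" padding keeps the time length, so branch
                    # outputs can be concatenated; a causal TCN would
                    # pad on the left instead.
                    padding=(kernel_size - 1) * dilation // 2,
                    dilation=dilation,
                ),
                nn.BatchNorm1d(hidden_ch),
                nn.ReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)


class MultiBranchTCN(nn.Module):
    """Parallel TCN branches fused by concatenation; global pooling over
    time lets the model accept recordings of arbitrary length."""

    def __init__(self, in_ch=40, hidden_ch=32, num_classes=3,
                 kernel_sizes=(3, 5, 7), num_layers=4):
        super().__init__()
        self.branches = nn.ModuleList(
            TCNBranch(in_ch, hidden_ch, k, num_layers) for k in kernel_sizes
        )
        # Concatenation fusion: feature dim = hidden_ch * number of branches.
        self.classifier = nn.Linear(hidden_ch * len(kernel_sizes), num_classes)

    def forward(self, x):  # x: (batch, in_ch, time), any time length
        fused = torch.cat([b(x) for b in self.branches], dim=1)
        pooled = fused.mean(dim=-1)     # global average over time
        return self.classifier(pooled)  # logits per event class


if __name__ == "__main__":
    # e.g. 40 spectral bands, a 500-frame segmented event, 3 hypothetical
    # event classes (inhalation, crackle, rhonchus)
    model = MultiBranchTCN()
    logits = model(torch.randn(2, 40, 500))
    print(logits.shape)  # torch.Size([2, 3])
```

Because every branch preserves the time dimension, concatenation along the channel axis is well defined regardless of input length, and the single pooled feature vector keeps the classifier small, which is consistent with the lightweight-network claim in the abstract.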

