Abstract

Auscultation is one of the most efficient ways to diagnose cardiovascular and respiratory diseases. To reach accurate diagnoses, a device must be able to recognize heart and lung sounds in various clinical situations. However, recorded chest sounds are mixtures of heart and lung sounds, so effectively separating these two sources is critical in the pre-processing stage. Recent advances in machine learning have improved monaural source separation, but most well-known techniques require paired mixed sounds and individual pure sounds for model training. Because pure heart and lung sounds are difficult to prepare, special designs are required to derive effective heart and lung sound separation techniques. In this study, we proposed a novel periodicity-coded deep auto-encoder (PC-DAE) approach that separates mixed heart-lung sounds in an unsupervised manner, based on the assumption that heart rates and respiration rates have different periodicities. PC-DAE benefits from deep-learning-based models by extracting representative features, and exploits the distinct periodicities of heart and lung sounds to carry out the separation. We evaluated PC-DAE on two datasets. The first includes sounds from the Student Auscultation Manikin (SAM), and the second was prepared by recording chest sounds in real-world conditions. Experimental results indicate that PC-DAE outperforms several well-known separation methods in terms of standardized evaluation metrics. Moreover, separated waveforms and spectrograms demonstrate the effectiveness of PC-DAE compared to existing approaches. It is also confirmed that using the proposed PC-DAE as a pre-processing stage can notably boost heart sound recognition accuracy. The experimental results confirm the effectiveness of PC-DAE and its potential for clinical applications.

Highlights

  • Recently, biological acoustic signals have been enabling various intelligent medical applications

  • Experimental results confirm the effectiveness of the periodicity-coded deep auto-encoder (PC-DAE) in separating mixed heart-lung sounds, outperforming related works, including direct-clustering nonnegative matrix factorization (DC-NMF) [35], PC-NMF [49], and deep clustering (DC) [45], in terms of three standardized evaluation metrics, qualitative comparisons based on separated waveforms and spectrograms, and heart sound recognition accuracy

  • In addition to the proposed PC-DAE(F) and PC-DAE(C), we tested some well-known approaches for comparison, including direct-clustering NMF (DC-NMF), PC-NMF, and deep clustering based on DAE (DC-DAE)


Summary

INTRODUCTION

Recently, biological acoustic signals have been enabling various intelligent medical applications. However, the frequency ranges of heart and lung sounds overlap considerably, which causes interference between the two acoustic signals and may degrade auscultation quality. With the increasing demand for acoustic-signal-based medical applications, effective heart and lung sound separation has become a fundamental yet challenging task. To overcome these challenges, this paper proposes a periodicity-coded deep auto-encoder (PC-DAE) approach, an unsupervised-learning-based mechanism that effectively separates heart and lung sound sources. Experimental results confirm the effectiveness of PC-DAE in separating mixed heart-lung sounds, outperforming related works, including direct-clustering NMF (DC-NMF) [35], PC-NMF [49], and deep clustering (DC) [45], in terms of three standardized evaluation metrics, qualitative comparisons based on separated waveforms and spectrograms, and heart sound recognition accuracy.
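The core assumption above, that heart and lung components can be told apart by the periodicity of their temporal modulations, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the frame rate, and the 0.8 Hz split threshold are illustrative assumptions, and in PC-DAE the grouped signals would be latent-unit activations of a trained deep auto-encoder rather than the synthetic envelopes used here.

```python
import numpy as np

def dominant_frequency(activation, fs):
    """Dominant modulation frequency (Hz) of one activation sequence,
    found from its magnitude spectrum with the DC component removed."""
    spec = np.abs(np.fft.rfft(activation - activation.mean()))
    freqs = np.fft.rfftfreq(len(activation), d=1.0 / fs)
    return freqs[np.argmax(spec)]

def group_units_by_periodicity(activations, fs, split_hz=0.8):
    """Assign each latent unit to a 'heart' or 'lung' group according to
    whether its dominant modulation frequency lies above or below split_hz.
    activations: (n_units, n_frames) array of activations over time."""
    groups = {"heart": [], "lung": []}
    for i, act in enumerate(activations):
        f = dominant_frequency(act, fs)
        groups["heart" if f >= split_hz else "lung"].append(i)
    return groups

# Synthetic demo: 50 frames/s, 10 s of activations.
fs = 50
t = np.arange(0, 10, 1.0 / fs)
heart_like = np.abs(np.sin(2 * np.pi * 1.2 * t))   # ~1.2 Hz beat (~72 bpm)
lung_like = np.abs(np.sin(2 * np.pi * 0.25 * t))   # ~0.25 Hz breath (~15/min)
acts = np.stack([heart_like, lung_like])
print(group_units_by_periodicity(acts, fs))        # heart unit 0, lung unit 1
```

Grouping latent units this way is what allows the separation to remain unsupervised: no paired pure heart or lung recordings are needed, only the prior that the two sources modulate at different rates.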

RELATED WORKS
THE PROPOSED METHOD
Periodic Analysis Algorithm
Experimental Setups
Latent Space Analysis of a Selected Case
Quantitative Evaluation Based on Source Separation Evaluation Metrics
Qualitative Comparison Based on Separated Waveforms and Spectrograms
CONCLUSION