Abstract

To enhance the accuracy of heart sound classification, this study addresses the limitations of conventional models that rely on handcrafted feature extraction. Such methods may distort or discard crucial pathological information in heart sounds because they require tedious manual parameter tuning. We propose a learnable front-end combined with an Efficient Channel Attention Network (ECA-Net) for heart sound classification. This approach optimizes the waveform-to-spectrogram transformation, enabling adaptive feature extraction from heart sound signals without domain knowledge. The extracted features are then fed into an ECA-Net-based convolutional recurrent neural network (CRNN), which emphasizes informative features and suppresses irrelevant ones. To address data imbalance, focal loss is employed in our model. On the well-known public PhysioNet Challenge 2016 dataset, our method achieved a classification accuracy of 97.77%, outperforming the majority of previous studies and trailing the best model by only 0.57%. By replacing the conventional heart sound feature-extraction module, the learnable front-end enables end-to-end training, providing a novel and efficient approach for heart sound classification research and applications and enhancing the practical utility of end-to-end models in this field.
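The abstract names focal loss as the remedy for class imbalance. As context only, here is a minimal pure-Python sketch of the standard binary focal loss (Lin et al., 2017); the `alpha` and `gamma` defaults shown are the common literature values, not necessarily the settings used in this paper:

```python
import math

def binary_focal_loss(y_true, p_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified examples so
    training focuses on hard ones -- useful when one class (e.g. normal
    heart sounds) dominates the dataset. Illustrative sketch, not the
    paper's implementation."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)          # avoid log(0)
        p_t = p if y == 1 else 1.0 - p           # probability of the true class
        a_t = alpha if y == 1 else 1.0 - alpha   # class-balance weight
        total += -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
    return total / len(y_true)
```

With `gamma = 0` this reduces to alpha-weighted cross-entropy; increasing `gamma` shrinks the contribution of confident correct predictions via the modulating factor `(1 - p_t) ** gamma`.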
