Abstract

Heart anomalies are an important class of medical conditions from personal, public-health, and social perspectives, so accurate and timely diagnosis matters. The heartbeat features two well-known amplitude peaks, termed S1 and S2. Many heart sound classification models rely on sound intervals segmented with reference to the locations of detected S1 and S2 peaks, but these peaks are often missing because of physiological causes and/or artifacts introduced by the sound-sampling process. The constituent and combined models we propose require no segmentation and are consequently more robust and more reliable. An intuitive phonocardiogram representation paired with a relatively simple deep learning architecture proved effective for classifying normal and abnormal heart sounds, and a frequency-spectrum-based deep learning network also produced competitive classification results. Merging the classification models via an SVM improved performance further: the combined SVM model, comprising two time-domain submodels and one frequency-domain submodel, achieved 0.9175 sensitivity, 0.8886 specificity, and 0.9012 accuracy.
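The SVM-based fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three submodel scores are simulated with synthetic data standing in for the outputs of the two time-domain networks and the frequency-domain network, and the train/test split and kernel choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 0 = normal, 1 = abnormal heart sound

# Simulate three submodel scores (probability of "abnormal") as noisy
# views of the true label -- hypothetical stand-ins for the two
# time-domain CNN outputs and the frequency-domain CNN output.
scores = np.column_stack(
    [np.clip(labels + rng.normal(0, 0.4, n), 0.0, 1.0) for _ in range(3)]
)

# SVM meta-classifier over the 3-dimensional score vector.
svm = SVC(kernel="rbf")
svm.fit(scores[:150], labels[:150])

acc = svm.score(scores[150:], labels[150:])  # held-out accuracy
```

In practice the score vectors would come from the trained submodels' predictions on a validation set, and the SVM learns how to weight and combine the three opinions into a single normal/abnormal decision.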
