Abstract

The sounds made by the heart are routinely used to assess the mechanics and status of blood flow within the heart. While machine-learning algorithms have been proposed to classify different heart conditions from auscultation recordings, these models typically output a single-label decision and offer little insight into the magnitude or severity of the classified condition. The inner workings of these networks are a ‘black box’, producing a diagnostic prediction without any explanation of how the conclusion was drawn. Here, we present a novel approach, combining deep convolutional neural networks and explainable AI algorithms, to extract key temporal signatures from recordings of multiple heart conditions, enabling multi-label classification and severity determination. The extracted signatures are consistent with previous research and convey more information than a single-label classification alone. The neural networks reach a multi-label classification accuracy of up to 78% (alongside 98% single-label accuracy) despite being trained solely on single-label data. This approach may assist physicians in making quick, accurate, and comprehensive diagnoses, and it provides new insights for the progress of multi-label machine learning in medical diagnosis.
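
For concreteness, the sketch below illustrates the kind of pipeline the abstract describes: a 1-D convolutional network over raw phonocardiogram clips, trained with single-label targets, read out through independent per-class sigmoids to obtain multi-label predictions, with a Grad-CAM-style temporal saliency map standing in for the explainable-AI step. Everything here (PyTorch, the layer sizes, the five-class label set, the 0.5 threshold, and Grad-CAM itself) is an assumption for illustration, not the authors' actual architecture or XAI method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 5  # hypothetical label set: normal + four abnormal conditions


class HeartSoundCNN(nn.Module):
    """Toy 1-D CNN over raw phonocardiogram samples (assumed architecture)."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        fmap = self.features(x)      # (batch, 64, T') conv feature map
        pooled = fmap.mean(dim=-1)   # global average pooling over time
        return self.head(pooled), fmap


model = HeartSoundCNN()
clip = torch.randn(1, 1, 8000)  # one dummy mono clip; real input would be audio

logits, fmap = model(clip)
fmap.retain_grad()  # keep the intermediate gradient for the saliency step

# Multi-label readout from a single-label-trained network: score each class
# independently with a sigmoid instead of competing them through a softmax.
probs = torch.sigmoid(logits)
predicted = probs > 0.5  # several conditions may exceed threshold at once

# Grad-CAM-style temporal saliency for the top-scoring class: weight each
# feature channel by the gradient of that class's logit, then sum and rectify.
logits[0, probs.argmax()].backward()
weights = fmap.grad.mean(dim=-1, keepdim=True)  # (1, 64, 1) channel weights
saliency = F.relu((weights * fmap).sum(dim=1))  # (1, T') temporal heat trace
```

Under this assumed setup, the per-class sigmoid scores and the amplitude of the saliency trace over particular phases of the cardiac cycle are plausible, though unverified, stand-ins for the severity read-out the abstract refers to.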
