Abstract

Deep-learning-based heart sound classification is of significant interest for reducing the burden of manual auscultation through the automated detection of signals, including abnormal heartbeats. This work presents a method for classifying phonocardiogram (PCG) signals as normal or abnormal by applying a deep Convolutional Neural Network (CNN) after transforming the signals into 2D color images. In particular, we present a new methodology based on fractal theory that exploits Partitioned Iterated Function Systems (PIFS) to generate 2D color images from 1D signals. PIFS have been extensively investigated in the context of image coding and indexing owing to their ability to interpolate and identify self-similar features in an image. Our classification approach shows high potential in terms of noise robustness and requires neither pre-processing nor an initial segmentation of the signal, unlike most approaches proposed in the literature. In this preliminary work, we carried out several experiments on the database released for the 2016 PhysioNet Challenge, varying both the classification networks and the inputs to the networks, thereby also evaluating data quality. Across all experiments, the best result obtained was a modified Accuracy (MAcc) of 0.85.
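The abstract does not detail the PIFS construction, but the core idea in fractal coding is to approximate each "range" block of a signal by an affine transform of a larger, downsampled "domain" block. The following is a minimal, hypothetical 1D sketch of that range/domain matching step (not the authors' actual image-generation method; block sizes and the least-squares fit are illustrative assumptions):

```python
import numpy as np

def pifs_encode_1d(signal, range_size=4):
    """Toy PIFS encoder: approximate each non-overlapping range block
    by an affine map (scale s, offset o) of the best-matching domain
    block, where domain blocks are twice as long and averaged down to
    the range size. Returns one (domain_index, s, o) triple per block."""
    n = len(signal) - len(signal) % range_size
    x = np.asarray(signal[:n], dtype=float)
    ranges = x.reshape(-1, range_size)

    # Domain pool: length-2r windows, decimated to length r by pairwise averaging.
    dom_size = 2 * range_size
    domains = np.array([
        x[start:start + dom_size].reshape(-1, 2).mean(axis=1)
        for start in range(0, n - dom_size + 1, range_size)
    ])

    code = []
    for r in ranges:
        best = None
        for di, d in enumerate(domains):
            # Least-squares fit r ≈ s*d + o for this candidate domain block.
            A = np.vstack([d, np.ones_like(d)]).T
            (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, di, s, o)
        code.append(best[1:])
    return code
```

The self-similarity captured by these (domain, scale, offset) triples is what a PIFS-based representation exposes; mapping such parameters onto pixel values is one plausible route from a 1D PCG signal to a 2D color image.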
