Abstract

Arrhythmia refers to an irregular heart rhythm caused by disruptions in the heart's electrical activity. An electrocardiogram (ECG), which records the heart's electrical signals, is the standard tool for identifying arrhythmias. However, ECG recordings are susceptible to interference from sources such as electromagnetic waves and electrode motion. Several researchers have investigated denoising ECG signals for arrhythmia detection using deep autoencoder models, but these studies have yielded suboptimal results, indicated by low Signal-to-Noise Ratio (SNR) values and relatively large Root Mean Square Error (RMSE). This study addresses these limitations by proposing a Deep LSTM Autoencoder to denoise ECG signals for arrhythmia detection. The model's denoising performance is evaluated by the SNR and RMSE it achieves. On the AFDB dataset, the Deep LSTM Autoencoder attains an SNR of 56.16 and an RMSE of 0.00037; on the MITDB dataset, the corresponding values are 65.22 and 0.00018. These results represent a significant improvement over previous research. A limitation of this study is the restricted availability of arrhythmia data from MITDB and AFDB; future researchers are encouraged to collect a more extensive set of arrhythmia data to further enhance denoising performance.
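The abstract evaluates denoising quality with SNR and RMSE. As a rough illustration of how such metrics are typically computed (the paper's exact SNR definition is not given here, so the common output-SNR formulation in decibels is assumed, along with a synthetic signal in place of real ECG data):

```python
import numpy as np

def snr_db(clean, denoised):
    # Output SNR in dB: power of the clean reference signal
    # divided by the power of the residual error.
    residual = clean - denoised
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2))

def rmse(clean, denoised):
    # Root Mean Square Error between reference and reconstruction.
    return np.sqrt(np.mean((clean - denoised) ** 2))

# Toy example: a sinusoid standing in for an ECG segment, and a
# slightly perturbed "denoised" reconstruction of it.
t = np.linspace(0.0, 1.0, 360)  # 360 samples, matching the MITDB sampling rate of 360 Hz
clean = np.sin(2 * np.pi * 5 * t)
denoised = clean + 0.001 * np.random.default_rng(0).standard_normal(t.shape)

print(snr_db(clean, denoised), rmse(clean, denoised))
```

Higher SNR and lower RMSE both indicate a reconstruction closer to the clean reference, which is the direction of improvement the study reports.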
