Abstract

Because of their high sampling rate, recorded Electrocardiogram (ECG) signals produce large volumes of data, so storing and transmitting them requires substantial space and bandwidth. It is therefore important to preprocess and compress ECG data so that they can be distributed and processed efficiently with less bandwidth and storage. This manuscript aims to create an effective ECG compression method. In this method, the recorded ECG data are first processed in a pre-processing unit (ProUnit), where they are standardized and segmented. The resulting ECG data are then sent to a Compression Unit (CompUnit), which combines a lossy compression algorithm (LosyComp) with a lossless compression algorithm (LossComp). The lossy algorithm transforms the low-redundancy ECG data into data with high redundancy; this high redundancy is then exploited by the lossless algorithm to reach a high compression ratio (CR) without further degradation. The LosyComp algorithms recommended in this manuscript are the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT); the suggested LossComp algorithms are Arithmetic Encoding (Arithm) and Run-Length Encoding (RLE). To evaluate the proposed method, we measure the Compression Time (CompTime), Reconstruction Time (RecTime), RMSE, and CR. Simulation results indicate that applying RLE after the DCT algorithm gives the best performance in terms of CR and complexity: with DCT as the LosyComp algorithm followed by RLE as the LossComp algorithm, the method achieves CR = 55% with RMSE = 0.14 and above 94% with RMSE = 0.2.
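
To make the two-stage pipeline concrete, the Python sketch below applies DCT coefficient thresholding as the lossy stage and a simple run-length encoder as the lossless stage. This is a minimal illustration of the general technique, not the authors' implementation; the function names (compress_ecg, rle_encode, reconstruct) and the keep_ratio threshold parameter are assumptions introduced here for clarity.

    import numpy as np
    from scipy.fft import dct, idct

    def compress_ecg(segment, keep_ratio=0.1):
        """Lossy stage: DCT the segment and zero out small coefficients.
        The zeroed coefficients create long runs of zeros, i.e. the high
        redundancy that the lossless stage exploits."""
        coeffs = dct(segment, norm="ortho")
        threshold = np.quantile(np.abs(coeffs), 1 - keep_ratio)
        coeffs[np.abs(coeffs) < threshold] = 0.0
        return coeffs

    def rle_encode(values):
        """Lossless stage: run-length encode, collapsing zero runs."""
        encoded, run = [], 0
        for v in values:
            if v == 0.0:
                run += 1
            else:
                if run:
                    encoded.append(("Z", run))  # a run of `run` zeros
                    run = 0
                encoded.append(("V", v))        # one nonzero coefficient
        if run:
            encoded.append(("Z", run))
        return encoded

    def reconstruct(encoded, length):
        """Expand the runs back into coefficients, then invert the DCT."""
        coeffs = np.zeros(length)
        i = 0
        for tag, val in encoded:
            if tag == "Z":
                i += val
            else:
                coeffs[i] = val
                i += 1
        return idct(coeffs, norm="ortho")

    # Example: compress one 1024-sample segment, then rebuild it.
    segment = np.sin(np.linspace(0, 20 * np.pi, 1024))  # stand-in for an ECG segment
    encoded = rle_encode(compress_ecg(segment))
    restored = reconstruct(encoded, len(segment))

Under these assumptions, keep_ratio controls the CR/RMSE trade-off: keeping fewer coefficients lengthens the zero runs (higher CR) at the cost of reconstruction error.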
