Abstract

Bubble Entropy is a new metric for quantifying the entropy of a series, the most important property of which is the total elimination of the scale parameter (e.g., r when computing Sample Entropy) and the low dependency on the length of the runs compared (e.g., the m parameter). In this paper, we evaluate the tolerance of Bubble Entropy to spikes and compare it to that of Sample Entropy. We use RR series publicly available on Physionet and compute Sample and Bubble Entropy before and after artificially adding spikes. We add N spikes of value a · std, where a is a parameter and std is the standard deviation of the signal. We then compute the relative error (absolute error / known value). According to our experiments, Bubble Entropy exhibits a remarkable tolerance to spikes. In all our experiments, Bubble Entropy presents a low relative error. Compared to Sample Entropy, the error reported by Bubble Entropy is always smaller and sometimes remarkably so (p < 0.001, Wilcoxon rank-sum test). The mean relative error for Sample Entropy ranges from 0.084 to 0.435, while the mean relative error for Bubble Entropy ranges from 0.069 to 0.122. The ratio of Sample Entropy to Bubble Entropy relative errors reaches up to 3.57.
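The spike-injection and error-measurement procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the entropy estimator is passed in as a generic callable `entropy_fn` (a hypothetical placeholder for a Sample Entropy or Bubble Entropy implementation), and spike positions are assumed to be chosen uniformly at random.

```python
import numpy as np

def add_spikes(signal, n_spikes, a, rng=None):
    """Return a copy of `signal` with `n_spikes` artificial spikes of
    amplitude a * std(signal) added at randomly chosen positions."""
    rng = np.random.default_rng() if rng is None else rng
    spiked = np.asarray(signal, dtype=float).copy()
    positions = rng.choice(len(spiked), size=n_spikes, replace=False)
    spiked[positions] += a * np.std(spiked)
    return spiked

def relative_error(entropy_fn, clean_signal, spiked_signal):
    """Relative error = |H(spiked) - H(clean)| / H(clean),
    where H is any entropy estimator (e.g., Sample or Bubble Entropy)."""
    clean_value = entropy_fn(clean_signal)
    return abs(entropy_fn(spiked_signal) - clean_value) / clean_value

# Usage sketch (rr is an RR interval series, entropy_fn a chosen estimator):
# spiked = add_spikes(rr, n_spikes=10, a=3.0)
# err = relative_error(entropy_fn, rr, spiked)
```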
