Abstract
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. And, whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel-type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as data points; in contrast, for KD the small sample bias can be as large as and for BC as large as . We speculate that estimating quantile locations, rather than bin-probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf.
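To make the idea concrete, below is a minimal Python sketch of the quantile-spacing approach described in the abstract: empirical quantiles split the support into equal-probability-mass intervals, the density is treated as piecewise constant on each interval, and bootstrapping approximates the sampling variability of the estimate. The use of np.quantile, the default α = 0.25, and the function names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def qs_entropy(sample, alpha=0.25):
    """Quantile Spacing (QS) entropy estimate (in nats) from a 1-D sample.

    Sketch only: empirical quantiles split the support into n_z intervals of
    equal probability mass 1/n_z, and the density is taken as piecewise
    constant on each interval.  The default alpha = 0.25 and the use of
    np.quantile are assumptions for illustration, not the published recipe.
    Assumes continuous data (no tied values).
    """
    s = np.asarray(sample, dtype=float)
    n_s = s.size
    n_z = max(2, int(round(alpha * n_s)))   # number of equal-mass intervals
    # Interval edges: quantiles at k/n_z for k = 0..n_z (sample min/max at the ends).
    edges = np.quantile(s, np.linspace(0.0, 1.0, n_z + 1))
    spacings = np.diff(edges)
    # Piecewise-constant density 1/(n_z * spacing_k) on each interval gives
    # H ≈ (1/n_z) * sum_k ln(n_z * spacing_k).
    return np.mean(np.log(n_z * spacings))

def qs_entropy_bootstrap(sample, alpha=0.25, n_boot=500, seed=None):
    """Bootstrap replicates of the QS estimate, approximating its sampling variability."""
    rng = np.random.default_rng(seed)
    s = np.asarray(sample, dtype=float)
    return np.array([qs_entropy(rng.choice(s, size=s.size, replace=True), alpha=alpha)
                     for _ in range(n_boot)])
```

As a quick check, for a standard Gaussian sample the estimate can be compared against the closed-form value 0.5·ln(2πe) ≈ 1.419 nats.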
Highlights
Consider a data-generating process p(x) from which a finite-size set of NS random, equiprobable, independent and identically distributed samples S = {s_i, i = 1, ..., NS} is drawn
We show how the overall percentage error in the Quantile Spacing (QS)-based estimate of entropy varies as a function of α = NZ/NS, where α expresses the number of quantiles NZ as a fraction of the sample size NS (an illustrative sweep is sketched after these highlights)
The results show that for smaller sample sizes (NS < 500) there is a tendency to overestimate the width of the inter-quartile range, but that this slight positive bias disappears for larger sample sizes
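A hypothetical version of such an error-versus-α sweep, reusing the qs_entropy function sketched after the abstract and Gaussian samples with known true entropy, might look as follows; the sample size, α grid, and replicate count are arbitrary choices for illustration and do not reproduce the paper's experiment.

```python
import numpy as np

# Illustrative sweep: mean percentage error of the QS estimate as a function of
# alpha = NZ/NS, for Gaussian samples whose true entropy is
# 0.5 * ln(2*pi*e*sigma^2) nats.  Reuses qs_entropy() sketched above.
rng = np.random.default_rng(0)
n_s, sigma = 1000, 1.0
h_true = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

for alpha in (0.05, 0.10, 0.25, 0.50):
    errors = [100.0 * (qs_entropy(rng.normal(0.0, sigma, n_s), alpha=alpha) - h_true) / h_true
              for _ in range(50)]          # 50 replicate samples per alpha
    print(f"alpha = {alpha:.2f}  mean % error = {np.mean(errors):+.2f}")
```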
Summary
For a variety of other distributions, closed-form solutions are not available, and one can compute Hp(X) via numerical integration of Equation (1). In all such cases, entropy estimation consists of first obtaining estimates θ̂|S of the parameters θ of the known parametric density p(x|θ), and then computing the entropy estimate Ĥp|θ̂(X|S) by plugging p(x|θ̂) into Equation (1). If the form of p(x|θ) is “assumed” rather than explicitly known, additional bias will stem from the inadequacy of this assumption.
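As a concrete illustration of this plug-in approach, the sketch below fits a Gaussian density to the sample (an assumed form, chosen here only for illustration) and numerically integrates Equation (1); the function name and integration limits are hypothetical.

```python
import numpy as np
from scipy import stats, integrate

def plugin_entropy_gaussian(sample):
    """Plug-in entropy: estimate θ̂|S for an assumed Gaussian p(x|θ), then
    numerically integrate Equation (1), Hp(X) = -∫ p(x|θ̂) ln p(x|θ̂) dx.

    Illustrative sketch; if the assumed Gaussian form is wrong, the estimate
    carries the additional bias discussed in the summary above.
    """
    mu = np.mean(sample)                    # θ̂ | S
    sigma = np.std(sample, ddof=1)
    pdf = stats.norm(mu, sigma).pdf
    integrand = lambda x: -pdf(x) * np.log(pdf(x))
    # Integrate over a range wide enough to contain essentially all the mass.
    h, _ = integrate.quad(integrand, mu - 10.0 * sigma, mu + 10.0 * sigma)
    return h
```

For Gaussian data this simply reproduces the closed-form value 0.5·ln(2πeσ̂²); the numerical-integration route matters mainly for parametric forms without an analytical entropy.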