Abstract

Calculating the Shannon entropy of symbolic sequences has been widely considered in many fields. For descriptive statistical problems such as estimating the N-gram entropy of English language text, a common approach is to use as much data as possible to obtain progressively more accurate estimates. However, in some instances, only short sequences may be available. This gives rise to the question of how many samples are needed to estimate entropy reliably. In this paper, we examine this problem and propose a method for estimating the number of samples required to compute Shannon entropy for a set of ranked symbolic “natural” events. The result is developed using a modified Zipf–Mandelbrot law and the Dvoretzky–Kiefer–Wolfowitz inequality, and we propose an approximation that yields the minimum number of samples required to estimate entropy at a given confidence level and degree of accuracy.
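The two ingredients named above can be sketched concretely. The snippet below is an illustration, not the paper's method: it assumes the standard Zipf–Mandelbrot form p_k ∝ (k + q)^(−s) for rank k, and the classical Dvoretzky–Kiefer–Wolfowitz bound n ≥ ln(2/α) / (2ε²), which gives the minimum number of samples for the empirical distribution function to lie within ε of the true one with probability at least 1 − α. All function names and parameter values here are illustrative.

```python
import math

def zipf_mandelbrot_probs(n_ranks, s=1.0, q=2.7):
    """Probabilities p_k proportional to 1 / (k + q)**s for ranks k = 1..n_ranks."""
    weights = [1.0 / (k + q) ** s for k in range(1, n_ranks + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum p_k log2 p_k."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def dkw_sample_size(epsilon, alpha):
    """Smallest n such that, by the DKW inequality, the empirical CDF is
    within epsilon of the true CDF with probability at least 1 - alpha:
    n >= ln(2 / alpha) / (2 * epsilon**2)."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * epsilon ** 2))

# Example: entropy of a 1000-rank Zipf-Mandelbrot distribution, and the
# DKW sample size for epsilon = 0.05 accuracy at 95% confidence.
probs = zipf_mandelbrot_probs(1000)
H = shannon_entropy(probs)
n_min = dkw_sample_size(epsilon=0.05, alpha=0.05)
```

Note that the DKW bound controls the empirical distribution function, not the entropy itself; translating distributional accuracy into entropy accuracy is precisely the step the paper's approximation addresses.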
