Abstract

The Maximum Likelihood Estimator (MLE) is widely used in estimating information measures, and involves "plugging in" the empirical distribution of the data to estimate a given functional of the unknown distribution. In this work we propose a general framework and procedure to analyze the nonasymptotic performance of the MLE in estimating functionals of discrete distributions, under the worst-case mean squared error criterion. We show that existing theory is insufficient for analyzing the bias of the MLE, and propose to apply the theory of approximation using positive linear operators to study this bias. The variance is controlled using well-known tools from the literature on concentration inequalities. Our techniques completely characterize, up to a multiplicative constant, the maximum $L_2$ risk incurred by the MLE in estimating the Shannon entropy $H(P) = \sum_{i=1}^{S} -p_i \ln p_i$ and the power sum $F_\alpha(P) = \sum_{i=1}^{S} p_i^\alpha$. As a corollary, for Shannon entropy estimation, we show that it is necessary and sufficient to have $n \gg S$ observations for the MLE to be consistent, where $S$ represents the support size. In addition, we obtain that it is necessary and sufficient to have $n \gg S^{1/\alpha}$ samples for the MLE to consistently estimate $F_\alpha(P)$, $0 < \alpha < 1$. When $1 < \alpha < 3/2$, the worst-case $L_2$ rate of convergence for the MLE is $n^{-2(\alpha-1)}$ for infinite support size, while the minimax $L_2$ rate is $(n \ln n)^{-2(\alpha-1)}$. When $\alpha \ge 3/2$, the MLE achieves the minimax optimal $L_2$ convergence rate $n^{-1}$ regardless of the support size.
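To make the plug-in idea concrete, the following is a minimal illustrative sketch (not from the paper; the function name `plug_in_estimates` and the example parameters are our own) of the MLE for $H(P)$ and $F_\alpha(P)$, obtained by substituting the empirical distribution directly into each functional:

```python
import numpy as np

def plug_in_estimates(samples, alpha):
    """Plug-in (MLE) estimates of H(P) and F_alpha(P) from i.i.d. samples.

    Illustrative sketch only: the empirical distribution P_hat is
    substituted directly into each functional; this is the estimator
    whose bias and variance the paper analyzes.
    """
    samples = np.asarray(samples)
    n = samples.size
    # Empirical distribution: observed frequencies p_hat_i = n_i / n.
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / n
    # H_hat = -sum_i p_hat_i ln p_hat_i (unseen symbols contribute 0).
    h_hat = -np.sum(p_hat * np.log(p_hat))
    # F_alpha_hat = sum_i p_hat_i^alpha.
    f_alpha_hat = np.sum(p_hat ** alpha)
    return h_hat, f_alpha_hat

# Hypothetical example: uniform source over S = 1000 symbols observed with
# only n = 500 < S samples. The plug-in entropy estimate is then badly
# biased downward (it cannot exceed ln of the number of distinct observed
# symbols), consistent with the n >> S requirement stated in the abstract.
rng = np.random.default_rng(0)
samples = rng.integers(0, 1000, size=500)
print(plug_in_estimates(samples, alpha=0.5))
```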
