Abstract

Consider a finite sample from an unknown distribution over a countable alphabet. The missing mass refers to the probability of symbols that do not appear in the sample. Estimating the missing mass is a basic problem in statistics and related fields, dating back to the early work of Laplace and the more recent seminal contribution of Good and Turing. In this article, we introduce a generalized Good-Turing (GT) framework for missing mass estimation. We derive an upper bound on the risk (in terms of mean squared error) and minimize it over the parameters of our framework. Our analysis distinguishes between two setups, depending on the (unknown) alphabet size. When the alphabet size is bounded from above, our risk bound demonstrates a significant improvement over currently known results (which are typically oblivious to the alphabet size). Based on this bound, we introduce a numerically obtained estimator that improves upon GT. When the alphabet size is unrestricted, we apply our suggested risk bound and introduce a closed-form estimator that again improves on GT performance guarantees. Our suggested framework is easy to apply and does not require additional modeling assumptions. This makes it a favorable choice for practical applications. Supplementary materials for this article are available online.
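For readers unfamiliar with the baseline that the article generalizes, the classical Good-Turing missing-mass estimator is the fraction of the sample made up of symbols observed exactly once. A minimal sketch in Python (the function name and example data are illustrative; this is the classical GT estimator only, not the paper's generalized framework):

```python
from collections import Counter


def good_turing_missing_mass(sample):
    """Classical Good-Turing estimate of the missing mass.

    Returns N1 / n, where N1 is the number of distinct symbols that
    appear exactly once in the sample and n is the sample size.
    """
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)  # count singletons
    return n1 / len(sample)


# Example: in "aabbcda", symbols 'c' and 'd' each appear once,
# so the estimated missing mass is 2/7.
print(good_turing_missing_mass(list("aabbcda")))
```

The intuition is that singletons are the symbols "closest" to being unseen, so their total sample frequency serves as a proxy for the probability mass of symbols not observed at all.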
