We consider the problem of estimating the $L_1$ distance between two discrete probability measures $P$ and $Q$ from empirical data in a nonasymptotic and large-alphabet setting. When $Q$ is known and one obtains $n$ samples from $P$, we show that for every $Q$, the minimax rate-optimal estimator with $n$ samples achieves performance comparable to that of the maximum likelihood estimator (MLE) with $n\ln n$ samples. When both $P$ and $Q$ are unknown, we construct minimax rate-optimal estimators whose worst-case performance is essentially that of the known-$Q$ case with $Q$ being uniform, implying that the uniform $Q$ is essentially the most difficult case. The \emph{effective sample size enlargement} phenomenon, identified in Jiao \emph{et al.} (2015), holds both in the known-$Q$ case for every $Q$ and in the unknown-$Q$ case. However, the construction of optimal estimators for $\|P-Q\|_1$ requires new techniques and insights beyond the approximation-based method of functional estimation in Jiao \emph{et al.} (2015).
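For concreteness, here is a minimal sketch of the plug-in (MLE) baseline for $\|P-Q\|_1$ with known $Q$, against which the abstract's sample-size comparison is made. This is not the paper's rate-optimal estimator; it only illustrates the plug-in rule $\|\hat{P}_n - Q\|_1$, assuming samples are encoded as integers in $\{0,\dots,S-1\}$, and all names below are illustrative.

```python
import numpy as np

def plugin_l1_known_q(samples, q):
    """Plug-in (MLE) estimate of ||P - Q||_1 when Q is known.

    samples: i.i.d. draws from P, encoded as integers in {0, ..., S-1}
    q: length-S array giving the known distribution Q
    """
    S = len(q)
    counts = np.bincount(samples, minlength=S)
    p_hat = counts / counts.sum()      # empirical frequencies = MLE of P
    return np.abs(p_hat - q).sum()     # plug P_hat into ||. - Q||_1

# Illustrative usage: n samples from a random P, Q uniform over S symbols
rng = np.random.default_rng(0)
S, n = 1000, 500
q = np.full(S, 1.0 / S)
p = rng.dirichlet(np.ones(S))
samples = rng.choice(S, size=n, p=p)
print(plugin_l1_known_q(samples, q))
```

In the large-alphabet regime illustrated above ($S > n$), this plug-in estimate is badly biased, which is what motivates the paper's improved estimators.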