Abstract

This paper studies convergence of empirical measures smoothed by a Gaussian kernel. Specifically, consider approximating $P \ast \mathcal{N}_\sigma$, for $\mathcal{N}_\sigma \triangleq \mathcal{N}(0, \sigma^2 \mathrm{I}_d)$, by $\hat{P}_n \ast \mathcal{N}_\sigma$ under different statistical distances, where $\hat{P}_n$ is the empirical measure. We examine the convergence in terms of the Wasserstein distance, total variation (TV) distance, Kullback-Leibler (KL) divergence, and $\chi^2$-divergence. We show that the approximation error under the TV distance and the 1-Wasserstein distance ($\mathsf{W}_1$) converges at the rate $e^{O(d)} n^{-1/2}$, in remarkable contrast to the (typical) $n^{-1/d}$ rate for unsmoothed $\mathsf{W}_1$ (when $d \ge 3$). Similarly, for the KL divergence, the squared 2-Wasserstein distance ($\mathsf{W}_2^2$), and the $\chi^2$-divergence, the convergence rate is $e^{O(d)} n^{-1}$, but only if $P$ achieves finite input-output $\chi^2$ mutual information across the additive white Gaussian noise (AWGN) channel. If the latter condition is not met, the rate changes to $\omega(n^{-1})$ for the KL divergence and $\mathsf{W}_2^2$, while the $\chi^2$-divergence becomes infinite, a curious dichotomy. As an application, we consider estimating the differential entropy $h(S+Z)$, where $S \sim P$ and $Z \sim \mathcal{N}_\sigma$ are independent $d$-dimensional random variables. The distribution $P$ is unknown and belongs to some nonparametric class, but $n$ independent and identically distributed (i.i.d.) samples from it are available. Despite the regularizing effect of noise, we first show that any good estimator (within an additive gap) for this problem must have a sample complexity that is exponential in $d$. We then leverage the above empirical approximation results to show that the absolute-error risk of the plug-in estimator converges as $e^{O(d)} n^{-1/2}$, thus attaining the parametric rate in $n$. This establishes the plug-in estimator as minimax rate-optimal for the considered problem, with sharp dependence of the convergence rate on both $n$ and $d$. We provide numerical results comparing the performance of the plug-in estimator to that of general-purpose (unstructured) differential entropy estimators (based on kernel density estimation (KDE) or $k$ nearest neighbors (kNN) techniques) applied to samples of $S+Z$. These results reveal a significant empirical superiority of the plug-in estimator over state-of-the-art KDE and kNN methods. As a motivating application of the plug-in approach, we estimate information flows in deep neural networks and discuss Tishby's Information Bottleneck and the compression conjecture, among other topics.
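For concreteness, the following is a minimal Python sketch (not taken from the paper) of the plug-in approach: it forms the Gaussian mixture $\hat{P}_n \ast \mathcal{N}_\sigma = \frac{1}{n}\sum_{i=1}^n \mathcal{N}(S_i, \sigma^2 \mathrm{I}_d)$ from the samples and approximates its differential entropy by Monte Carlo. The function name, the Monte Carlo evaluation step, and all parameter choices are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.special import logsumexp

def plugin_entropy_estimate(samples, sigma, num_mc=5000, seed=0):
    """Sketch of the plug-in estimate of h(S + Z), Z ~ N(0, sigma^2 I_d):
    the differential entropy (in nats) of the Gaussian mixture
    (1/n) * sum_i N(S_i, sigma^2 I_d) built from the n samples,
    approximated by Monte Carlo as -E[log q(X)], X drawn from the mixture."""
    rng = np.random.default_rng(seed)
    n, d = samples.shape

    # Draw X_j ~ P_hat_n * N_sigma: pick a sample uniformly, add Gaussian noise.
    idx = rng.integers(n, size=num_mc)
    x = samples[idx] + sigma * rng.standard_normal((num_mc, d))

    # Log-density of the mixture q at each X_j.
    sq_dists = ((x[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)  # (num_mc, n)
    log_comp = -sq_dists / (2.0 * sigma**2) - 0.5 * d * np.log(2.0 * np.pi * sigma**2)
    log_q = logsumexp(log_comp, axis=1) - np.log(n)

    # h(P_hat_n * N_sigma) ~= -(1/num_mc) * sum_j log q(X_j).
    return -log_q.mean()

# Illustrative usage: S uniform on [0, 1]^3 with sigma = 1.
rng = np.random.default_rng(1)
s = rng.random((500, 3))
print(plugin_entropy_estimate(s, sigma=1.0))
```

The Monte Carlo step is only one possible way to evaluate the mixture entropy numerically; the paper's guarantees concern the quantity $h(\hat{P}_n \ast \mathcal{N}_\sigma)$ itself.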
