Abstract

Positivity of the prior probability of Kullback-Leibler neighborhoods around the true density, commonly known as the Kullback-Leibler property, plays a fundamental role in posterior consistency. A popular prior for Bayesian density estimation is given by a Dirichlet mixture, where the kernel is chosen depending on the sample space and the class of densities to be estimated. The Kullback-Leibler property of the Dirichlet mixture prior has been shown for some special kernels, such as the normal density or Bernstein polynomials, under appropriate conditions. In this paper, we obtain easily verifiable sufficient conditions under which a prior obtained by mixing a general kernel possesses the Kullback-Leibler property. We study a wide variety of kernels used in practice, including the normal, t, histogram, gamma, and Weibull densities, and show that the Kullback-Leibler property holds if some easily verifiable conditions are satisfied at the true density. This gives a catalog of conditions required for the Kullback-Leibler property, which can be readily used in applications.
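In symbols, with f₀ denoting the true density and Π the prior on densities, the Kullback-Leibler property in its standard formulation asks that every Kullback-Leibler neighborhood

    K_ε(f₀) = { f : ∫ f₀(x) log( f₀(x) / f(x) ) dx < ε }

carry positive prior mass, that is, Π(K_ε(f₀)) > 0 for every ε > 0.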

Highlights

  • Density estimation, which is relevant in various applications such as cluster analysis and robust estimation, is a fundamental nonparametric inference problem

  • In the Bayesian approach to density estimation, a prior such as a Gaussian process, a Polya tree process, or a Dirichlet mixture is constructed on the space of probability densities

  • The variable x and the parameters θ, φ and ξ appearing in the kernel are not necessarily one-dimensional

  • Asymptotic properties, such as consistency and the rate of convergence of the posterior distribution based on kernel mixture priors, were established by Ghosal, Ghosh and Ramamoorthi [11], Tokdar [29], and Ghosal and van der Vaart [13; 14], when the kernel is chosen to be a normal probability density


Summary

Introduction

Density estimation, which is relevant in various applications such as cluster analysis and robust estimation, is a fundamental nonparametric inference problem. In the kernel mixture approach, a prior on densities is built from a kernel K(x; θ, φ), a prior Π on the mixing distribution P for θ, and a prior μ on the hyperparameter φ. Let Φ be the space of φ and supp(μ) denote the support of μ. With such a random hyperparameter in the chosen kernel, the prior on densities is induced by μ × Π via the map (φ, P) ↦ f_{P,φ}(x) := ∫ K(x; θ, φ) dP(θ). Note that the variable x and the parameters θ, φ and ξ mentioned above are not necessarily one-dimensional. Asymptotic properties, such as consistency and the rate of convergence of the posterior distribution based on kernel mixture priors, were established by Ghosal, Ghosh and Ramamoorthi [11], Tokdar [29], and Ghosal and van der Vaart [13; 14], when the kernel is chosen to be a normal probability density (and the prior distribution of the mixing distribution is a Dirichlet process).
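To make the map (φ, P) ↦ f_{P,φ} concrete, the following is a minimal sketch of one draw from a Dirichlet process normal mixture prior via a truncated stick-breaking representation; the truncation level, base measure, precision α, and bandwidth prior are all illustrative assumptions, not choices made in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def draw_dp_normal_mixture(alpha=1.0, truncation=50):
        # Stick-breaking weights: v_j ~ Beta(1, alpha),
        # w_j = v_j * prod_{l < j} (1 - v_l), truncated at `truncation` sticks
        v = rng.beta(1.0, alpha, size=truncation)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
        # Atoms theta_j drawn i.i.d. from an assumed base measure, here N(0, 1)
        theta = rng.normal(0.0, 1.0, size=truncation)
        # Bandwidth phi drawn from an assumed prior mu, here the reciprocal
        # square root of a gamma-distributed precision (an illustration only)
        phi = 1.0 / np.sqrt(rng.gamma(2.0, 1.0))
        # Induced random density f_{P,phi}(x) = sum_j w_j K(x; theta_j, phi),
        # with K the normal density with location theta_j and scale phi
        def f(x):
            x = np.asarray(x)[..., None]
            kern = np.exp(-0.5 * ((x - theta) / phi) ** 2) / (phi * np.sqrt(2 * np.pi))
            return (w * kern).sum(axis=-1)
        return f

    f = draw_dp_normal_mixture()
    print(f(np.linspace(-3.0, 3.0, 7)))  # one random density drawn from the prior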

General Kernel Mixture Priors
Location-scale kernel
Examples
  • Location-scale kernels (see the sketch after this list)
  • Multivariate normal density kernel
  • Double-exponential density kernel
  • Kernels with bounded support
  • Histogram density kernel
  • Triangular density kernel
  • Lognormal density kernel
  • Weibull density kernel
  • Gamma density kernel
  • Inverse gamma density kernel
  • Exponential density kernel
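Several of the kernels above belong to the location-scale family, K(x; θ, φ) = φ⁻¹ ψ((x − θ)/φ) for a fixed baseline density ψ. Below is a minimal sketch, with arbitrary illustrative weights and atoms, of how different choices of ψ (normal and double-exponential) plug into the same mixture map f_{P,φ}; none of the numerical values come from the paper.

    import numpy as np

    # Location-scale kernel: K(x; theta, phi) = psi((x - theta) / phi) / phi,
    # where psi is a fixed baseline density.
    def normal_psi(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    def double_exponential_psi(u):
        return 0.5 * np.exp(-np.abs(u))

    def mixture_density(x, weights, thetas, phi, psi):
        # f_{P,phi}(x) = sum_j w_j * K(x; theta_j, phi) for a discrete P
        x = np.asarray(x)[..., None]
        return (weights * psi((x - thetas) / phi) / phi).sum(axis=-1)

    # Arbitrary illustrative mixing distribution P = 0.3·δ(−1) + 0.5·δ(0) + 0.2·δ(2)
    weights = np.array([0.3, 0.5, 0.2])
    thetas = np.array([-1.0, 0.0, 2.0])
    x = np.linspace(-4.0, 4.0, 5)
    print(mixture_density(x, weights, thetas, phi=0.5, psi=normal_psi))
    print(mixture_density(x, weights, thetas, phi=0.5, psi=double_exponential_psi))

Swapping the kernel only changes ψ; the mixture map and the mixing distribution are untouched, which is the structural point exploited by the general sufficient conditions of the paper.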
