Abstract

We consider the problem of estimating the predictive density in a heteroskedastic Gaussian model under general divergence loss. Based on a conjugate hierarchical set-up, we consider generic classes of shrinkage predictive densities that are governed by location and scale hyper-parameters. For any α-divergence loss, we propose a risk-estimation based methodology for tuning these shrinkage hyper-parameters. Our proposed predictive density estimators enjoy optimal asymptotic risk properties that are in concordance with the optimal shrinkage calibration point estimation results established by Xie, Kou, and Brown (2012) for heteroskedastic hierarchical models. These α-divergence risk optimality properties of our proposed predictors are not shared by empirical Bayes predictive density estimators that are calibrated by traditional methods such as maximum likelihood and method of moments. We conduct several numerical studies to compare the non-asymptotic performance of our proposed predictive density estimators with other competing methods and obtain encouraging results.
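For orientation, one standard parameterization of the α-divergence loss between the true density p of the future observation and an estimate p̂ is the following (the paper's exact convention and normalization may differ):

D_α(p, p̂) = \frac{4}{1-\alpha^2} \left( 1 - \int p(y)^{(1-\alpha)/2} \, \hat{p}(y)^{(1+\alpha)/2} \, dy \right), \qquad -1 < \alpha < 1,

with the limits α → ±1 recovering the two directions of Kullback–Leibler loss and α = 0 giving a multiple of the squared Hellinger distance. In a generic conjugate heteroskedastic set-up of the kind described above (notation here is illustrative, not necessarily the paper's), one observes X_i | θ_i ~ N(θ_i, v_i) with known, unequal variances, targets the density of a future Y_i | θ_i ~ N(θ_i, ṽ_i), and places the conjugate prior θ_i ~ N(μ, λ); the location μ and scale λ are the shrinkage hyper-parameters that the risk-estimation procedure tunes.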

Highlights

  • Predictive density estimation is one of the fundamental problems in statistical prediction analysis

  • We demonstrate the benefits of using α-divergence risk-calibrated predictive density estimates (prdes) over those calibrated by empirical Bayes maximum likelihood (EBMLE) or empirical Bayes method of moments (EBMOM)

  • We establish an upper bound on D_{n,α} that depends on the L2 norm of the signal strength g_n(θ, η)


Summary

Introduction

Predictive density estimation (prde) is one of the fundamental problems in statistical prediction analysis (see chapters 2, 7 and 10 of Aitchison and Dunsmore, 1975, and chapters 2, 3 and 9 of Geisser, 1993). Our proposed prde possesses the plug-in-dominance properties of the Bayes prde, as in Ghosh et al. (2008), and attains the minimal risk among a wide class of shrinkage rules. These α-divergence predictive risk optimality properties parallel those established by Xie, Kou, and Brown (2012) for point estimation in heteroskedastic hierarchical models. We establish asymptotic optimality of our proposed predictive methods akin to the point estimation results in Xie, Kou, and Brown (2012); these asymptotic properties are not shared by EBMLE or EBMOM based prdes. Dimension-independent non-asymptotic characterizations of the predictive risk of our proposed estimators are provided using maximal inequalities for martingales. We compare these comprehensive results for general α-divergence with those of Xu and Zhou (2011), who studied empirical Bayes prde in a spherically symmetric homoskedastic Gaussian model under KL loss. The direction of shrinkage and the shape of the optimally shrunken prdes vary greatly as α changes.
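As a concrete, heavily simplified illustration of risk-estimation-based hyper-parameter calibration, the sketch below tunes the scale hyper-parameter λ of a conjugate shrinkage rule in a heteroskedastic Gaussian model by minimizing Stein's unbiased risk estimate under quadratic loss, in the spirit of the point-estimation calibration of Xie, Kou, and Brown (2012). The paper's actual procedure minimizes an estimate of the α-divergence predictive risk of a shrinkage predictive density, so this quadratic-loss analogue is only indicative, and all function and variable names are illustrative.

import numpy as np

def sure_quadratic(lam, x, variances, mu=0.0):
    # Stein's unbiased risk estimate (quadratic loss) for the conjugate
    # shrinkage estimator theta_hat_i = lam/(lam + A_i) * x_i + A_i/(lam + A_i) * mu
    # in the heteroskedastic model X_i ~ N(theta_i, A_i); an analogue of the
    # SURE criterion used by Xie, Kou, and Brown (2012).
    A = variances
    w = A / (lam + A)                        # amount of shrinkage toward mu
    return np.mean(A - 2.0 * w * A + w ** 2 * (x - mu) ** 2)

def calibrate_lambda(x, variances, mu=0.0, grid=None):
    # Choose lambda by minimizing the risk estimate over a grid
    # (a transparent stand-in for numerical optimization).
    if grid is None:
        grid = np.geomspace(1e-4, 1e4, 400)
    risks = [sure_quadratic(lam, x, variances, mu) for lam in grid]
    return grid[int(np.argmin(risks))]

# Toy example with known, unequal variances.
rng = np.random.default_rng(0)
n = 200
A = rng.uniform(0.1, 2.0, size=n)            # known heteroskedastic variances
theta = rng.normal(0.0, 1.0, size=n)         # latent means
x = rng.normal(theta, np.sqrt(A))            # observed data
lam_hat = calibrate_lambda(x, A)
shrunk = lam_hat / (lam_hat + A) * x         # shrinkage locations (mu = 0)
print(f"calibrated lambda: {lam_hat:.3f}")

In the predictive-density problem, the calibrated hyper-parameters would index a Gaussian shrinkage predictive density rather than a point estimate, and the quadratic-loss criterion above would be replaced by an estimate of the α-divergence risk.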

Predictive Set-Up
Risk Estimation and Hyper-parameter Calibration
Theory Results
Simulation Experiments
Discussion and Future