Abstract

Let X|μ ∼ N_p(μ, v_x I) and Y|μ ∼ N_p(μ, v_y I) be independent p-dimensional multivariate normal vectors with common unknown mean μ. Based on only observing X = x, we consider the problem of obtaining a predictive density p̂(y|x) for Y that is close to p(y|μ) as measured by expected Kullback-Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density p̂_U(y|x) under the uniform prior π_U(μ) ≡ 1, which is best invariant and minimax. We show that any Bayes predictive density will be minimax if it is obtained by a prior yielding a marginal that is superharmonic or whose square root is superharmonic. This yields wide classes of minimax procedures that dominate p̂_U(y|x), including Bayes predictive densities under superharmonic priors. Fundamental similarities and differences with the parallel theory of estimating a multivariate normal mean under quadratic loss are described.
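For orientation, the loss and the uniform-prior procedure referred to above take the following standard forms in this Gaussian setup (a sketch in the abstract's notation, added here for reference rather than quoted from the paper):

\[
L\bigl(\mu, \hat p(\cdot \mid x)\bigr) = \int p(y \mid \mu)\, \log \frac{p(y \mid \mu)}{\hat p(y \mid x)}\, dy ,
\qquad
R(\mu, \hat p) = \mathrm{E}_{X \mid \mu}\, L\bigl(\mu, \hat p(\cdot \mid X)\bigr) ,
\]
\[
\hat p_U(y \mid x) = N_p\bigl(y \,;\, x,\ (v_x + v_y)\, I\bigr) .
\]

The second display is the familiar expression for the uniform-prior predictive density, a normal density centered at x whose variance is the sum of the two sampling variances; the paper's contribution is the superharmonicity condition on the prior's marginal under which other Bayes predictive densities are minimax and dominate p̂_U(y|x).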
