Abstract

Consider an estimate $\theta^*$ of a parameter $\theta$ based on repeated observations from a family of densities $f_\theta$ evaluated by the Kullback–Leibler loss function $K(\theta, \theta^*) = \int \log(f_\theta/f_{\theta^*})f_\theta$. The maximum likelihood prior density, if it exists, is the density for which the corresponding Bayes estimate is asymptotically negligibly different from the maximum likelihood estimate. The Bayes estimate corresponding to the maximum likelihood prior is identical to maximum likelihood for exponential families of densities. In predicting the next observation, the maximum likelihood prior produces a predictive distribution that is asymptotically at least as close, in expected truncated Kullback–Leibler distance, to the true density as the density indexed by the maximum likelihood estimate. It frequently happens in more than one dimension that maximum likelihood corresponds to no prior density, and in that case the maximum likelihood estimate is asymptotically inadmissible and may be improved upon by using the estimate corresponding to a least favorable prior. As in Brown, the asymptotic risk for an arbitrary estimate “near” maximum likelihood is given by an expression involving derivatives of the estimator and of the information matrix. Admissibility questions for these “near ML” estimates are determined by the existence of solutions to certain differential equations.
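
As a concrete illustration (an added example, not part of the original abstract), take the normal location family $f_\theta = N(\theta, 1)$, for which the Kullback–Leibler loss above reduces to scaled squared error:
$$K(\theta, \theta^*) = \int \log\!\left(\frac{f_\theta}{f_{\theta^*}}\right) f_\theta = \tfrac{1}{2}(\theta - \theta^*)^2.$$
Under this loss the Bayes estimate is the posterior mean, and with a flat prior the posterior mean equals the sample mean, which is the maximum likelihood estimate; so in this one-parameter exponential family a maximum likelihood prior exists and its Bayes estimate coincides with maximum likelihood, consistent with the exponential-family claim in the abstract.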
