Abstract

Suppose that $X_n$ is a sample of size $n$ with log-likelihood $n\,l(\theta)$, where $\theta$ is an unknown parameter in $\mathbb{R}^p$ with prior distribution $\xi(\theta)$. We need not assume that the sample values are independent or even stationary. Let $\hat\theta$ be the maximum likelihood estimate (MLE). We show that $\theta \mid X_n$ is asymptotically normal with mean $\hat\theta$ and covariance $-n^{-1}\, l_{\cdot,\cdot}(\hat\theta)^{-1}$, where $l_{\cdot,\cdot}(\theta) = \partial^2 l(\theta)/\partial\theta\,\partial\theta'$. In contrast, $\hat\theta \mid \theta$ is asymptotically normal with mean $\theta$ and covariance $n^{-1}[I(\theta)]^{-1}$, where $I(\theta) = -E[\, l_{\cdot,\cdot}(\theta) \mid \theta\,]$ is Fisher's information. So frequentist inference conditional on $\theta$ cannot be used to approximate Bayesian inference, except for exponential families. However, under mild conditions $-l_{\cdot,\cdot}(\hat\theta) \mid \theta \to I(\theta)$ in probability, so Bayesian inference (that is, inference conditional on $X_n$) can be used to approximate frequentist inference. For $t(\theta)$ any smooth function, we obtain posterior cumulant expansions, posterior Edgeworth–Cornish–Fisher (ECF) expansions and posterior tilted Edgeworth expansions for $\mathcal{L}(t(\theta) \mid X_n)$, as well as confidence regions for $t(\theta) \mid X_n$ of high accuracy. We also give expansions for the Bayes estimate (estimator) of $t(\theta)$ about $t(\hat\theta)$, and for the maximum a posteriori estimate about $\hat\theta$, as well as their relative efficiencies with respect to squared-error loss.
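
As a rough numerical illustration of the first result (not taken from the paper), the sketch below compares an exact conjugate posterior with the asymptotic normal approximation $\theta \mid X_n \approx N(\hat\theta,\, -n^{-1} l_{\cdot,\cdot}(\hat\theta)^{-1})$. The model (i.i.d. Exponential data with rate $\theta$), the Gamma$(1,1)$ prior, the sample size and the random seed are all hypothetical choices made only for this example.

```python
import numpy as np
from scipy import stats

# Hypothetical setup: i.i.d. Exponential(rate=theta) data with a Gamma(a, b) prior.
rng = np.random.default_rng(0)
n, theta_true = 200, 2.0
x = rng.exponential(scale=1.0 / theta_true, size=n)

# Average per-observation log-likelihood l(theta) = log(theta) - theta * mean(x),
# so l''(theta) = -1/theta**2 and the MLE is theta_hat = 1/mean(x).
theta_hat = 1.0 / x.mean()
post_var_approx = theta_hat**2 / n          # equals -(n * l''(theta_hat))**(-1)

# Exact conjugate posterior: Gamma(a + n, rate = b + sum(x)); here a = b = 1 (hypothetical).
a, b = 1.0, 1.0
post_exact = stats.gamma(a + n, scale=1.0 / (b + x.sum()))
post_approx = stats.norm(theta_hat, np.sqrt(post_var_approx))

# Compare the two densities on a grid around the MLE.
grid = np.linspace(theta_hat - 4 * np.sqrt(post_var_approx),
                   theta_hat + 4 * np.sqrt(post_var_approx), 9)
for t in grid:
    print(f"theta={t:.3f}  exact pdf={post_exact.pdf(t):.3f}  "
          f"normal approx pdf={post_approx.pdf(t):.3f}")
```

For moderate $n$ the two densities are already close; the higher-order posterior expansions described in the abstract refine this basic normal approximation.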
