Abstract

The inverse of a normal covariance matrix is called the precision matrix and often plays an important role in statistical estimation problems. This paper deals with the problem of estimating the precision matrix under a quadratic loss, where the precision matrix is restricted to a bounded parameter space. Gauss’ divergence theorem with a matrix argument shows that the unbiased, unrestricted estimator is dominated by the posterior mean associated with a flat prior on the bounded parameter space. An improving method is also given by considering an expansion estimator, and a hierarchical prior is shown to improve on the posterior mean. An application is given to Bayesian prediction in a random-effects model.
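
For orientation, the "unbiased and unrestricted estimator" of a precision matrix that such results typically start from is, in the standard multivariate normal setting, (n - p - 1) S^{-1}, where S is the scatter (Wishart) matrix. The following is a minimal sketch of that classical estimator only, not the paper's restricted or Bayesian estimators; the dimensions, sample size, and covariance values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

p, n = 3, 50  # dimension and sample size (illustrative values)

# True covariance and its inverse (the precision matrix).
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
Omega = np.linalg.inv(Sigma)

# Zero-mean normal data; S = X'X is Wishart-distributed with n degrees of freedom.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X

# Classical unbiased estimator of the precision matrix:
# since E[S^{-1}] = Omega / (n - p - 1), the estimator (n - p - 1) S^{-1} is unbiased.
Omega_hat = (n - p - 1) * np.linalg.inv(S)
```

The paper's point of departure is that this unbiased estimator ignores any bound on the parameter space; a posterior mean under a flat prior on the bounded space dominates it under quadratic loss.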
