Abstract

This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.
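The rectangle screened normal family mentioned above restricts normal prior mass to a hyper-rectangle. As a rough illustration only, and not the paper's exact two-stage construction, the sketch below draws from a bivariate normal prior restricted to a hypothetical rectangle C = [a, b] by simple rejection sampling; the location mu, scale matrix Sigma, and bounds a, b are hypothetical values chosen for demonstration.

```python
# Minimal sketch (not the paper's exact construction): draw from a bivariate
# normal prior restricted to a hyper-rectangle C = [a, b] via rejection sampling.
# mu, Sigma, a, and b below are hypothetical values chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.0, 0.0])          # prior location (assumed)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])     # prior scale matrix (assumed)
a = np.array([-0.5, -0.5])         # lower bounds of the rectangle C
b = np.array([1.0, 1.0])           # upper bounds of the rectangle C

def sample_rectangle_truncated(n, mu, Sigma, a, b, rng):
    """Draw n points from N(mu, Sigma) conditioned on the event a <= theta <= b."""
    draws = []
    while len(draws) < n:
        batch = rng.multivariate_normal(mu, Sigma, size=1000)
        keep = np.all((batch >= a) & (batch <= b), axis=1)
        draws.extend(batch[keep])
    return np.array(draws[:n])

theta_draws = sample_rectangle_truncated(5000, mu, Sigma, a, b, rng)
print(theta_draws.mean(axis=0))    # prior mass shifted into the rectangle C
```

Rejection sampling is used here only because it is the simplest way to visualize the interval-constrained prior; it becomes inefficient when C carries little prior probability.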

Highlights

  • Suppose yi's are independent observations from a scale mixture of a p-variate normal distribution with the p × 1 location parameter θ and known scale matrix (a simulation sketch of this sampling model follows this list)

  • We suggest an objective measure of uncertainty regarding the stochastic constraint on θ that is accounted for by the two-stage maximum entropy prior

  • (i) The convergence of the Markov chain Monte Carlo (MCMC) sampling algorithm was evident, and a discussion of the convergence is given in Subsection 5.2; (ii) the estimates of θ obtained from the HCN(πtwo) and hierarchical constrained multivariate tν, HCtν(πtwo), models are uniformly closer to the stochastic constraint θ ∈ C than those from the corresponding HCN(πmax) and HCtν(πmax) models

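As a concrete illustration of the sampling model in the first highlight, here is a minimal sketch that simulates data from one standard member of the scale-mixture-of-normals family, the multivariate t: yi = θ + zi / sqrt(λi) with zi ~ Np(0, Σ) and λi ~ Gamma(ν/2, ν/2). The values of θ, Σ, ν, and n are hypothetical.

```python
# Minimal sketch: simulate from a scale mixture of p-variate normals, taking
# the multivariate t as a concrete member of the family.  theta, Sigma, nu,
# and n below are hypothetical values used only for illustration.
import numpy as np

rng = np.random.default_rng(1)

p, n, nu = 2, 200, 5
theta = np.array([1.0, -0.5])                  # location parameter (assumed)
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])                 # known scale matrix (assumed)

lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # Gamma(nu/2, nu/2) mixing variables
z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # normal deviates
y = theta + z / np.sqrt(lam)[:, None]          # marginally multivariate t_nu

print(y.mean(axis=0))                          # rough sample check of the location
```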

Summary

Introduction

Suppose yi's are independent observations from a scale mixture of a p-variate normal distribution with the p × 1 location parameter θ and known scale matrix. To the best of our knowledge, a formal method for setting up a prior density of θ that is consistent with information regarding the moments of the density, as well as with an uncertain prior belief about the location parameter, has not previously been investigated in the literature. Such practical considerations motivate us to develop a prior density of θ, which is the problem tackled in this paper. As discussed by [17,18,19,20], entropy has a direct relationship to information theory and measures the amount of uncertainty inherent in a probability distribution. Using this property of entropy, we propose a two-stage hierarchical method for setting up the two-stage maximum entropy prior density of θ.
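To make the entropy argument concrete, the following minimal univariate sketch compares the Shannon (differential) entropy of an unconstrained normal prior with that of the same prior truncated to an interval; the reduction in entropy reflects the information carried by the interval constraint. The interval endpoints are hypothetical, and this illustrates only the general principle, not the paper's multivariate two-stage construction.

```python
# Minimal univariate sketch: an interval constraint reduces the Shannon
# (differential) entropy of the prior.  The interval endpoints below are
# hypothetical and serve only to illustrate the principle.
from scipy import stats

mu, sigma = 0.0, 1.0
lower, upper = -0.5, 1.0                       # interval constraint (assumed)

unconstrained = stats.norm(loc=mu, scale=sigma)
constrained = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma,
                              loc=mu, scale=sigma)

print(unconstrained.entropy())   # about 1.42 nats for the standard normal
print(constrained.entropy())     # smaller: the constraint carries information
```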

Maximum Entropy Prior
Two-Stage Maximum Entropy Prior
Case 1
Case 2
Case 3
Objective Measure of Uncertainty
Properties of the Entropy
Posterior Distribution
Hierarchical Constrained Scale Mixture of Normal Model
The Hierarchical Model
The Gibbs Sampler
Markov Chain Monte Carlo Sampling Scheme
Bayes Estimation
Numerical Illustrations
Simulation Study
Car Body Assembly Data Example
Conclusions