Abstract

Normalizing constants of conditional distributions include Bayesian marginal likelihoods and likelihoods of mixture models, such as hierarchical models and state-space time-series models. A promising method for estimating such quantities was proposed by Chib and Jeliazkov (CJ) and improved by Mira and Nicholls using bridge sampling results. Here three additional improvements and one theoretical result for the methods of CJ are given. First, a different Metropolis–Hastings proposal density is used for estimating the normalizing constant than for the MCMC run. Second, a ratio of effective sample sizes is incorporated into the optimal bridge function to account for sequential dependence of the MCMC output. Third, the Moving Block Bootstrap is used to estimate the variance of the normalizing constant estimates, which is then minimized with respect to the CJ proposal density and bridge function. It is shown that the optimal proposal density for estimating the normalizing constant, regardless of the proposal density used for the MCMC, is the (unknown) full conditional density. Results from likelihood estimation for a state-space time-series model show that the improvements can decrease the standard error of the log-normalizing constant by an order of magnitude. The methods perform well even for a model that fits the data poorly.
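The abstract compresses several moving parts, so a concrete toy illustration may help. The sketch below, which is not the paper's implementation, combines the three ingredients on a one-dimensional Gaussian target whose true normalizing constant is sqrt(2*pi): a random-walk Metropolis run, a separate normal estimation proposal (distinct from the MCMC proposal, per the first improvement), the Meng–Wong iterative optimal bridge with the chain's effective sample size substituted for its nominal size, and a Moving Block Bootstrap for the standard error of the log-normalizing constant. All function names, proposal scales, block lengths, and iteration counts are illustrative assumptions, not the paper's settings.

# Hedged sketch, not the paper's code: bridge estimation of a normalizing
# constant with a separate estimation proposal, an ESS-adjusted optimal
# bridge, and a moving-block-bootstrap standard error. The 1-D Gaussian
# target and all tuning constants are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def log_ptilde(theta):
    # Unnormalized log target exp(-theta^2/2); true constant is sqrt(2*pi).
    return -0.5 * theta**2

def rw_metropolis(n, step=2.5, init=0.0):
    # MCMC run; its random-walk proposal is NOT reused in the estimation step.
    chain = np.empty(n)
    x = init
    for i in range(n):
        y = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_ptilde(y) - log_ptilde(x):
            x = y
        chain[i] = x
    return chain

def ess(x):
    # Effective sample size from the initial positive autocorrelations.
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:] / (x @ x)
    s = 0.0
    for k in range(1, len(acf)):
        if acf[k] <= 0:
            break
        s += acf[k]
    return len(x) / (1.0 + 2.0 * s)

def bridge_logc(chain, prop, mu=0.0, sd=1.2, iters=50):
    # q2 = Normal(mu, sd) is the separate estimation proposal; the paper's
    # optimality result says the ideal choice is the full conditional (here
    # the target itself, N(0,1)), so sd=1.2 is a deliberate approximation.
    def log_q2(t):
        return -0.5 * ((t - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
    l1 = log_ptilde(chain) - log_q2(chain)  # log(q1/q2) at MCMC draws
    l2 = log_ptilde(prop) - log_q2(prop)    # log(q1/q2) at proposal draws
    n1, n2 = ess(chain), float(len(prop))   # ESS replaces the nominal n1
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    logc = 0.0
    for _ in range(iters):  # Meng-Wong iteration for the optimal bridge
        num = np.exp(l2 - np.logaddexp(np.log(s1) + l2, np.log(s2) + logc)).mean()
        den = np.exp(-np.logaddexp(np.log(s1) + l1, np.log(s2) + logc)).mean()
        logc = np.log(num) - np.log(den)
    return logc

def mbb_se(chain, prop, block=50, reps=100):
    # Moving Block Bootstrap: resample overlapping blocks of the chain to
    # respect serial dependence; proposal draws are resampled i.i.d.
    n, nblocks = len(chain), len(chain) // block
    est = np.empty(reps)
    for r in range(reps):
        starts = rng.integers(0, n - block + 1, size=nblocks)
        boot = np.concatenate([chain[s:s + block] for s in starts])
        prop_b = prop[rng.integers(0, len(prop), size=len(prop))]
        est[r] = bridge_logc(boot, prop_b)
    return est.std(ddof=1)

chain = rw_metropolis(12_000)[2_000:]          # drop burn-in
prop = 0.0 + 1.2 * rng.standard_normal(5_000)  # i.i.d. draws from q2
print(f"log c-hat = {bridge_logc(chain, prop):.4f}"
      f"  (truth = {0.5 * np.log(2 * np.pi):.4f})")
print(f"MBB standard error = {mbb_se(chain, prop):.4f}")

In this toy setting, replacing sd=1.2 with 1.0 makes the estimation proposal equal to the full conditional, which by the paper's optimality result should shrink the bootstrap standard error; minimizing the MBB standard error over such proposal choices mirrors the third improvement.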
