Abstract

Bayesian modelling enables us to accommodate complex forms of data and make comprehensive inferences, but the effect of partial misspecification of the model is a concern. One approach in this setting is to modularize the model and prevent feedback from suspect modules, using a cut model. After observing data, this leads to the cut distribution, which typically does not have a closed form. Previous studies have proposed algorithms to sample from this distribution, but these algorithms have unclear theoretical convergence properties. To address this, we propose a new algorithm, the stochastic approximation cut (SACut) algorithm, as an alternative. The algorithm runs two parallel chains: the main chain targets an approximation to the cut distribution, while the auxiliary chain is used to form an adaptive proposal distribution for the main chain. We prove convergence of the samples drawn by the proposed algorithm and present the exact limit. Although SACut is biased, since the main chain does not target the exact cut distribution, we prove that this bias can be reduced geometrically by increasing a user-chosen tuning parameter. In addition, parallel computing can easily be adopted for SACut, which greatly reduces computation time.
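The two-chain structure described above can be sketched in miniature. The toy below is a heavily simplified illustration, not the authors' SACut algorithm: it stands in a standard normal for the approximate cut density, uses plain random-walk Metropolis for both chains, and adapts the main chain's proposal scale from the auxiliary chain's running variance. All function and variable names are hypothetical.

```python
import math
import random

def log_target(x):
    # Toy stand-in for the (approximate) cut density: a standard normal.
    return -0.5 * x * x

def sacut_sketch(n_iter=20000, seed=1):
    """Illustrative two-chain sampler: an auxiliary chain adapts the
    proposal used by the main chain. Not the SACut algorithm itself."""
    rng = random.Random(seed)
    aux = 0.0                  # auxiliary chain state
    x = 0.0                    # main chain state
    scale = 1.0                # adaptive proposal scale for the main chain
    n, mean, m2 = 0, 0.0, 0.0  # Welford running stats of the auxiliary chain
    samples = []
    for _ in range(n_iter):
        # Auxiliary chain: plain random-walk Metropolis on the same toy target.
        prop = aux + rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < log_target(prop) - log_target(aux):
            aux = prop
        # Update running moments of the auxiliary chain (Welford's method).
        n += 1
        d = aux - mean
        mean += d / n
        m2 += d * (aux - mean)
        # Adapt the main chain's proposal from the auxiliary chain's history.
        if n > 100:
            scale = max(0.1, 2.4 * math.sqrt(m2 / n))
        # Main chain: random-walk Metropolis with the adapted proposal.
        prop = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples
```

Because the two chains share no accept/reject decisions, they can be run on separate workers, which is the structural property that makes parallel computing natural here.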

Highlights

  • Bayesian models mathematically formulate our beliefs about the data and parameter

  • In a low-dimensional case (d = 1), running stochastic approximation cut (SACut) and naive SACut takes longer than running the WinBUGS algorithm and the nested MCMC algorithm when the length of the internal chain is less than 500, but both the mean square error (MSE) and the Gelman–Rubin statistic are lower when using the SACut algorithm

  • There is only a trivial difference in bias between SACut and nested MCMC when n_int ≥ 1000, but SACut is significantly faster than nested MCMC



Introduction

Bayesian models mathematically formulate our beliefs about the data and parameters. Such models are often highly structured and represent strong assumptions. Many of the desirable properties of Bayesian inference require the model to be correctly specified. We say a family of models f(x|θ), where θ ∈ Θ, is misspecified if there is no θ0 ∈ Θ such that the data X are independently and identically generated from f(x|θ0) (Walker 2013). Models will inevitably fall short of covering every nuance of the truth.
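The definition above can be made concrete with a toy example (not from the paper): if the data are generated from a Laplace distribution but the assumed family f(x|θ) is Gaussian, then no θ0 = (μ, σ) can reproduce the data-generating process, since every Gaussian has excess kurtosis 0 while the Laplace has excess kurtosis 3. The helper names below are hypothetical.

```python
import random

def sample_laplace(rng, n):
    # True data-generating process: Laplace(0, 1), heavier-tailed than
    # any member of the assumed Gaussian family.
    return [rng.expovariate(1.0) * (1 if rng.random() < 0.5 else -1)
            for _ in range(n)]

def excess_kurtosis(xs):
    # Sample excess kurtosis: E[(X - m)^4] / Var(X)^2 - 3.
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

rng = random.Random(0)
data = sample_laplace(rng, 50000)
# Every Gaussian f(x | mu, sigma) has excess kurtosis 0, so no theta0 in the
# Gaussian family matches this moment of the truth: the family is misspecified.
print(excess_kurtosis(data))  # close to the Laplace value of 3, far from 0
```

A Gaussian fit would still return a "best" (μ, σ), but no parameter value recovers the true tail behaviour, which is exactly the situation the definition of misspecification captures.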

