Abstract

In the Bayesian framework a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, but computing some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors when the model-checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. The second problem is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets generated under the reference distribution. In both problems we argue that high accuracy is not required in the computations, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
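To make the comparison described above concrete, the sketch below estimates a posterior predictive p-value by Monte Carlo, assuming draws from the posterior are already available; the names posterior_draws, simulate_data and test_stat are illustrative placeholders, not objects defined in the paper.

```python
import numpy as np

def posterior_predictive_pvalue(posterior_draws, simulate_data, test_stat, y_obs, rng=None):
    """Monte Carlo estimate of a posterior predictive p-value.

    posterior_draws : iterable of parameter draws theta_1, ..., theta_M from p(theta | y_obs)
    simulate_data   : function (theta, rng) -> replicate data set y_rep ~ p(y | theta)
    test_stat       : function y -> scalar checking statistic T(y)
    y_obs           : observed data
    """
    rng = np.random.default_rng() if rng is None else rng
    t_obs = test_stat(y_obs)
    # Simulate one replicate data set per posterior draw and evaluate the statistic
    t_rep = np.array([test_stat(simulate_data(theta, rng)) for theta in posterior_draws])
    # p-value: proportion of replicates at least as extreme as the observed statistic
    return np.mean(t_rep >= t_obs)
```

A prior predictive p-value has the same form, with draws from the prior p(θ) used in place of posterior draws.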

Highlights

  • We consider Bayesian inference for a parameter θ with prior p(θ), and a parametric model p(y|θ) for data y with observed value y_obs

  • An established approach to model criticism in the Bayesian setting involves comparing some function of the observed data to a reference distribution, such as the prior predictive (Box, 1980) or posterior predictive distribution (Guttman, 1967; Rubin, 1984; Gelman et al., 1996)

  • We suggest the use of regression adjustment approximate Bayesian computation (ABC) methods to approximate the simulation step, easing the computational burden

Introduction

We consider Bayesian inference for a parameter θ with prior p(θ), and a parametric model p(y|θ) for data y with observed value y_obs. The first main contribution of the paper concerns the choice of weakly informative priors: approximating the distributions of conflict p-values for test statistics suited to characterizing weak informativity involves repeated sampling from the prior predictive distributions p(S|λ) for a large number of different values of the hyperparameter λ, and this is computationally expensive when simulation of S is expensive. The second main contribution concerns calibration of posterior predictive p-values in model checking so that they are uniformly distributed under some reference distribution for the data, such as the prior predictive distribution; the calibration requires repeated approximation of the posterior for different data sets drawn from the reference distribution. A further contribution of this paper is to suggest performing this repeated posterior approximation using regression adjustment ABC methods, which is computationally thrifty since it involves only fitting regression models.
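As a concrete illustration of the regression adjustment step referred to above, the sketch below implements a local-linear adjustment in the spirit of Beaumont et al. (2002) for a scalar parameter: simulated parameter values paired with simulated summary statistics are weighted by their distance from the observed summary, a weighted linear regression of the parameter on the summaries is fitted, and the retained draws are shifted to the observed summary value. This is an assumed minimal implementation, not code from the paper.

```python
import numpy as np

def abc_regression_adjustment(theta_sims, S_sims, s_obs, keep_frac=0.1):
    """Local-linear regression adjustment ABC for a scalar parameter.

    theta_sims : (N,) parameter values simulated from the prior (or a proposal)
    S_sims     : (N, d) summary statistics simulated from the model at each theta
    s_obs      : (d,) observed summary statistic
    keep_frac  : fraction of simulations retained, by distance to s_obs
    Returns adjusted draws approximating the posterior p(theta | s_obs).
    """
    theta_sims = np.asarray(theta_sims, dtype=float)
    S_sims = np.asarray(S_sims, dtype=float).reshape(len(theta_sims), -1)
    s_obs = np.atleast_1d(np.asarray(s_obs, dtype=float))

    # Scale summaries and compute distances to the observed summary
    scale = S_sims.std(axis=0)
    scale[scale == 0] = 1.0
    dist = np.linalg.norm((S_sims - s_obs) / scale, axis=1)

    # Keep the closest simulations and give them Epanechnikov kernel weights
    keep = dist <= np.quantile(dist, keep_frac)
    theta_k, S_k, d_k = theta_sims[keep], S_sims[keep], dist[keep]
    w = 1.0 - (d_k / d_k.max()) ** 2

    # Weighted linear regression of theta on (S - s_obs); the intercept is the fit at s_obs
    X = np.column_stack([np.ones(len(theta_k)), S_k - s_obs])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta_k))

    # Shift each retained draw towards the observed summary value
    return theta_k - (S_k - s_obs) @ beta[1:]
```

Because each such approximation only requires fitting a regression to already-simulated (θ, S) pairs, repeating it across many hyperparameter values λ or many reference data sets is comparatively cheap, which is the property exploited here.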

Prior and posterior predictive checks
Regression adjustment ABC
Weakly informative prior selection
Regression adjustment for exploring weak informativity
Normal location model
Logistic regression example
Calibration of posterior predictive p-values
The need for calibration
The basic idea
An alternative motivation and some limitations
Capture–recapture example
Discussion