Abstract

We introduce the generic Bayesian model-fitting software WinBUGS, OpenBUGS, and JAGS, and give many examples of how WinBUGS and JAGS can be run from R using the R packages R2WinBUGS, jagsUI, and rjags. We do so by fitting some of the simple and mostly nonhierarchical models from Chapter 3—i.e., linear models and generalized linear models (GLMs)—and two simple kinds of hierarchical models (HMs): random-effects and mixed models. Specifically, we fit normal, Poisson, and binomial GLMs: a normal-error multiple linear regression and several analysis-of-covariance (ANCOVA) models for normal, Poisson, and binomial responses. As simple examples of HMs, we fit conventional Poisson and binomial GLMs with random effects—i.e., generalized linear mixed models (GLMMs). For comparison, we also fit all models using classical (non-Bayesian) least-squares, maximum likelihood, or restricted maximum likelihood methods. We emphasize that with reasonable sample sizes and vague priors, Bayesian and frequentist inferences are typically extremely similar numerically. We illustrate the power of Bayesian posterior inference, based on MCMC samples from the joint posterior distribution, to make probability statements about any unknown quantity: parameters, latent variables, functions of parameters or latent variables, and predictions. We illustrate Bayesian model checking based on residuals. Throughout, we emphasize predictions as a highly useful summary of a fitted model—i.e., the expected values of the response for specific values of the explanatory variables, along with a full uncertainty assessment. Predictions are of great practical importance for understanding the meaning of a model's parameters and for communicating the results of an analysis; yet ecologists often find them challenging to understand, especially for GLMs and when covariates have been transformed.
We show what to do in a Bayesian analysis when there are missing values in the response or the covariates, and how to compute the proportion of variance explained (R²). Finally, we illustrate Bayesian goodness-of-fit assessment based on posterior predictive distributions, a special kind of prediction in which “replicate” data sets are generated under the same model, and compare this approach with the frequentist parametric bootstrap.
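As a minimal sketch of the JAGS-from-R workflow summarized above, the following R code fits a Poisson GLM with vague priors via the jagsUI package. It assumes JAGS and jagsUI are installed; the simulated data, parameter values, and temporary model file are invented for illustration and do not come from the chapter itself.

```r
# Sketch: fitting a Poisson GLM in JAGS via jagsUI (hypothetical example).
# Assumes JAGS is installed; data are simulated for illustration only.
library(jagsUI)

set.seed(1)
n <- 100
x <- rnorm(n)                       # covariate
y <- rpois(n, exp(0.5 + 1 * x))     # Poisson response

# Write the BUGS-language model description to a temporary file
modfile <- tempfile(fileext = ".txt")
writeLines("
model {
  alpha ~ dnorm(0, 0.001)           # vague priors
  beta  ~ dnorm(0, 0.001)
  for (i in 1:n) {
    y[i] ~ dpois(lambda[i])
    log(lambda[i]) <- alpha + beta * x[i]
  }
}", modfile)

# Run three MCMC chains and summarize posterior samples
out <- jags(data = list(y = y, x = x, n = n),
            parameters.to.save = c("alpha", "beta"),
            model.file = modfile,
            n.chains = 3, n.iter = 2000, n.burnin = 500)
print(out)   # posterior means, credible intervals, Rhat convergence checks
```

With vague priors as here, the posterior means and credible intervals should closely match the estimates and confidence intervals from a classical fit such as `glm(y ~ x, family = poisson)`, illustrating the numerical similarity of Bayesian and frequentist inference noted above.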
