Abstract

Several model comparison techniques exist to select the best-fitting model from a set of candidate models. This study explores the performance of model comparison tools that are commonly used in Bayesian spatial disease mapping and that are available in several Bayesian software packages: the deviance information criterion (DIC), the Watanabe–Akaike information criterion (WAIC) and the log marginal predictive likelihood (LMPL). We compare the R packages CARBayes and NIMBLE, and the R interfaces to OpenBUGS (R2OpenBUGS) and Stan (RStan), by fitting Poisson models to disease incidence outcomes with intrinsic conditional autoregressive, convolution conditional autoregressive and log-normal error terms. From three data analyses that differ in the number of areal units and in the background incidence/prevalence of the outcome of interest, we learn that estimates of model comparison statistics obtained from different software packages can lead to disagreement about which model is preferred. Furthermore, we show that distributional convergence of the parameter estimates does not necessarily imply numerical convergence of the model comparison tool. We therefore caution users against comparing model selection criteria computed by different software packages, and recommend using a single method to calculate the criteria across all candidate models.
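One source of the disagreement discussed above is that packages can implement the same criterion with different pointwise definitions or penalty terms. As an illustration (not the implementation used by any of the packages named here), a minimal sketch of the standard WAIC formula of Watanabe, computed from a matrix of pointwise posterior log-likelihoods, might look like the following; the data and posterior draws are synthetic and purely hypothetical:

```python
import numpy as np
from scipy.stats import poisson

def waic(loglik):
    """WAIC from an S x N matrix of pointwise log-likelihoods
    (S posterior draws, N observations).

    Uses the variance-based penalty p_WAIC; some implementations
    use a different penalty, which is one reason reported values
    can differ across software packages.
    """
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    # deviance scale, so that smaller is better
    return -2.0 * (lppd - p_waic)

# Hypothetical example: Poisson counts and posterior draws of the rate
rng = np.random.default_rng(42)
y = rng.poisson(5.0, size=20)                     # 20 observed counts
mu_draws = rng.gamma(25.0, 0.2, size=(500, 1))    # 500 posterior draws of mu
loglik = poisson.logpmf(y, mu_draws)              # broadcasts to (500, 20)
print(waic(loglik))
```

Because the penalty `p_waic` is a variance, it is always non-negative, so WAIC is never smaller than `-2 * lppd`; alternative penalty definitions break this guarantee, which is worth checking when two packages report different values for the same fitted model.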
