Abstract
Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates (disease counts divided by the total population at risk) have been criticized as unreliable because of the non-constant variance associated with heterogeneity in base population size. This has led to the use of model-based Bayes or empirical Bayes point estimates for map creation. Because the maps have important epidemiological and political consequences (for example, they are often used to identify small areas with unusually high or low unexplained risk), it is important that the assumptions of the underlying models be scrutinized. We review the use of posterior predictive model checks, which compare features of the observed data with the same features of replicate data generated under the model, for assessing model fit. One crucial issue is whether extreme observations are potentially important epidemiological findings or merely evidence of poor model fit. We propose the cross-validation posterior predictive distribution, obtained by reanalyzing the data without a suspect small area, as a method for assessing whether the observed count in that area is consistent with the model. Because it may not be feasible to reanalyze the data for each suspect small area in large data sets, two methods for approximating the cross-validation posterior predictive distribution are described.
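To make the two checks concrete, the following is a minimal sketch rather than the paper's actual model: it assumes a simple Poisson-gamma (empirical Bayes) model for small-area counts with made-up observed counts y and expected counts E, computes an ordinary posterior predictive p-value for each area, and approximates the cross-validation version by refitting the prior with the suspect area held out. The function names (fit_gamma_prior, posterior_predictive_pvalue) and all numbers are hypothetical and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (not from the paper): observed counts y_i and expected
# counts E_i for a handful of small areas.
y = np.array([12, 7, 30, 5, 9, 18])              # observed disease counts
E = np.array([10.0, 8.0, 15.0, 6.0, 9.0, 17.0])  # expected counts

def fit_gamma_prior(y, E):
    """Moment-based empirical Bayes fit of a Gamma(a, b) prior on the relative
    risks theta_i, assuming y_i | theta_i ~ Poisson(E_i * theta_i)."""
    r = y / E
    m = r.mean()
    v = max(r.var(ddof=1) - m * (1.0 / E).mean(), 1e-6)  # subtract Poisson noise
    b = m / v
    a = m * b
    return a, b

def posterior_predictive_pvalue(y, E, i, n_rep=20_000, leave_out=False):
    """Posterior predictive check for area i: how often do replicate counts
    drawn under the fitted model equal or exceed the observed count y[i]?
    With leave_out=True the prior is refit without area i and theta_i is drawn
    from that prior alone (cross-validation posterior predictive check)."""
    mask = np.ones(len(y), bool)
    if leave_out:
        mask[i] = False
    a, b = fit_gamma_prior(y[mask], E[mask])
    # With area i included, its posterior is Gamma(a + y_i, b + E_i);
    # with area i held out, theta_i comes from the refit prior only.
    a_post = a + (0 if leave_out else y[i])
    b_post = b + (0 if leave_out else E[i])
    theta = rng.gamma(a_post, 1.0 / b_post, size=n_rep)
    y_rep = rng.poisson(E[i] * theta)
    return np.mean(y_rep >= y[i])  # upper-tail posterior predictive p-value

for i in range(len(y)):
    p_in = posterior_predictive_pvalue(y, E, i)
    p_cv = posterior_predictive_pvalue(y, E, i, leave_out=True)
    print(f"area {i}: y={y[i]:>2}  E={E[i]:>5.1f}  ppp={p_in:.3f}  cv-ppp={p_cv:.3f}")
```

Under these assumptions, a very small cross-validation p-value for an area indicates that its observed count is hard to reconcile with what the model fitted to the remaining areas would predict, which is the distinction the abstract draws between a potentially important epidemiological finding and mere lack of fit.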