Abstract

The power prior is a useful general class of priors that can be used for arbitrary classes of regression models, including generalized linear models, generalized linear mixed models, semiparametric survival models with censored data, frailty models, multivariate models, and nonlinear models. The power prior specification for the regression coefficients focuses on observable quantities in that the elicitation is based on historical data, D0, and a scalar quantity, a0, quantifying the heterogeneity between the current data, D, and the historical data, D0. The power prior distribution is then constructed by raising the likelihood function of the historical data to the power a0, where 0 ≤ a0 ≤ 1. The scalar a0 is a precision parameter that can be viewed as a measure of compatibility between the historical and current data. In this article we give a formal justification of the power prior and show that it is an optimal class of informative priors in the sense that it minimizes a convex sum of Kullback-Leibler (KL) divergences between two specific posterior densities, one based on no incorporation of historical data and the other based on pooling the historical and current data. This result provides a strong motivation for using the power prior as an informative prior in Bayesian inference. In addition, we derive a formal relationship between this convex sum of KL divergences and previously proposed information-processing rules; specifically, we show that the power prior is a 100% efficient information-processing rule in this sense. Several examples involving simulations as well as real datasets demonstrate the proposed methodology.
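
In symbols, the following is a minimal sketch of the construction and the optimality result described above, written in the standard notation of the power prior literature; the initial prior π0(θ) and the exact convex weights (1 − a0) and a0 are the usual conventions rather than details stated in the abstract:

\[
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), \qquad 0 \le a_0 \le 1.
\]

Writing $f_1(\theta) \propto L(\theta \mid D)\,\pi_0(\theta)$ for the posterior that ignores the historical data and $f_2(\theta) \propto L(\theta \mid D)\,L(\theta \mid D_0)\,\pi_0(\theta)$ for the posterior based on the pooled data, the posterior under the power prior is the density $g$ that minimizes the convex sum

\[
K(g) \;=\; (1 - a_0)\,\mathrm{KL}(g \,\|\, f_1) \;+\; a_0\,\mathrm{KL}(g \,\|\, f_2),
\]

since the minimizer of this criterion is $g^{*} \propto f_1^{\,1-a_0}\, f_2^{\,a_0} \propto L(\theta \mid D)\, L(\theta \mid D_0)^{a_0}\, \pi_0(\theta)$, which is exactly the posterior obtained from the power prior.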
