Abstract

A wide variety of statistical methods have been proposed for detecting unusual performance in cross-sectional data on health care providers. We attempt to create a unified framework for comparing these methods, focusing on a clear distinction between estimation and hypothesis testing approaches, with the corresponding distinction between detecting ‘extreme’ and ‘divergent’ performance. When assuming a random-effects model, the random-effects distribution forms the null hypothesis, and there appears little point in testing whether individual effects are greater or less than average. The hypothesis testing approach uses p-values as summaries and brings with it the standard problems of multiple testing, whether Bayesian or classical inference is adopted. A null random-effects formulation allows us to answer appropriate questions of the type: ‘is a particular provider worse than we would expect the true worst provider (but still part of the null distribution) to be?’ We outline a broad three-stage strategy of exploratory detection of unusual providers, detailed modelling robust to potential outliers and confirmation of unusual performance, illustrated by using two detailed examples. The concepts are most easily handled within a Bayesian analytic framework using Markov chain Monte Carlo methods, but the basic ideas should be generally applicable.
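The question ‘is a particular provider worse than we would expect the true worst provider under the null to be?’ can be illustrated with a simple Monte Carlo sketch. The code below is a minimal illustration, not the paper's method: it assumes a normal random-effects null with a hypothetical between-provider standard deviation `tau`, number of providers `n_providers`, and observed effect `observed_effect`, and compares the observed effect with the simulated predictive distribution of the largest true effect under the null.

```python
import numpy as np

rng = np.random.default_rng(0)

n_providers = 50   # hypothetical number of providers under comparison
tau = 0.2          # assumed between-provider SD under the null random-effects model

# Predictive distribution of the *true worst* (here: largest) provider
# effect when all providers are drawn from the null N(0, tau^2).
n_sim = 10_000
sim_effects = rng.normal(0.0, tau, size=(n_sim, n_providers))
worst_under_null = sim_effects.max(axis=1)

# Hypothetical estimated effect for one provider of interest.
observed_effect = 0.55

# Probability that even the true worst null provider would be this extreme:
# a small value suggests genuinely 'divergent' rather than merely 'extreme'
# performance.
p_exceed = (worst_under_null >= observed_effect).mean()
print(f"P(worst null provider >= {observed_effect:.2f}) = {p_exceed:.3f}")
```

In a full Bayesian analysis this comparison would be embedded in an MCMC fit of the hierarchical model, with uncertainty in `tau` and in the provider's estimate propagated through; the sketch above fixes both for clarity.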
