Abstract

Although the impact of interviewers on survey measurement has been studied for more than 85 years, such impacts are rarely considered in the analysis of health surveys. This issue is particularly important for single-stage surveys such as the one used in the Behavioral Risk Factor Surveillance System (BRFSS), where there are no sampling clusters in which interviewer effects could be captured. The BRFSS, established in 1984 by the Centers for Disease Control and Prevention, is an ongoing telephone survey of the health behaviors of US adults. Public-use BRFSS data files are widely used by epidemiologists and public health researchers to describe the health behaviors of adults in the United States. Since its inception, the BRFSS has included identification codes for the telephone interviewers completing BRFSS interviews in its public-use data files; however, a review of BRFSS publications shows no evidence that these codes have been used in estimating standard errors. In this paper we analyze data from the 2012 BRFSS, illustrate both design-based and model-based approaches to incorporating interviewer effects in variance estimation, and find evidence of substantial interviewer effects for 5 key estimates across states. These results suggest that BRFSS analysts should consider accounting for interviewer effects, and we provide example code enabling analysts to do so. We conclude with suggestions regarding possible directions for future research.
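The sketch below is not the authors' example code; it is a minimal illustration, in Python with statsmodels, of the two general strategies the abstract describes. It assumes a hypothetical extract of the 2012 public-use file with columns `phys_unhealthy_days` (a continuous outcome), `age` (a covariate), and `interviewer_id` (the interviewer identification code); the file name and variable names are placeholders. The design-based approach treats interviewers as clusters and uses cluster-robust standard errors, while the model-based approach fits a multilevel model with a random intercept for each interviewer.

```python
# Minimal sketch (not the authors' code) of accounting for interviewer effects.
# Assumes a hypothetical BRFSS extract with columns:
#   phys_unhealthy_days, age, interviewer_id
import pandas as pd
import statsmodels.formula.api as smf

brfss = pd.read_csv("brfss_2012_subset.csv")  # hypothetical public-use extract

# Design-based analogue: treat each interviewer as a cluster and compute
# cluster-robust (sandwich) standard errors for the regression estimates.
design_fit = smf.ols(
    "phys_unhealthy_days ~ age", data=brfss
).fit(cov_type="cluster", cov_kwds={"groups": brfss["interviewer_id"]})
print(design_fit.summary())

# Model-based analogue: a multilevel (mixed) model with a random intercept
# for each interviewer, which also estimates the interviewer-level
# variance component.
model_fit = smf.mixedlm(
    "phys_unhealthy_days ~ age", data=brfss, groups=brfss["interviewer_id"]
).fit()
print(model_fit.summary())
```

In practice, a full BRFSS analysis would also incorporate the survey weights and any stratification variables; the sketch omits these to keep the contrast between the two variance-estimation strategies visible.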
