Abstract

Multilevel multiple membership models account for situations where lower-level units are nested within multiple higher-level units from the same classification. Failing to account correctly for such multiple membership structures leads to biased results. The use of a multiple membership model requires the selection of weights reflecting the hypothesized contribution of each level-two unit and its relationship to the level-one outcome. The Deviance Information Criterion (DIC) has been proposed as a means of identifying such weights. For the case of logistic regression, this study assesses, through simulation, how often the DIC identifies the correct multiple membership weights and what the properties of the model variance estimators are under different weight specifications, across a range of scenarios. The study is motivated by the analysis of interviewer effects across waves in a longitudinal study. Interviewers can substantially influence the behavior of sample survey respondents, including their decision to participate in the survey, and in a longitudinal survey several interviewers may contact a sample member across different waves. Multilevel multiple membership models can accommodate higher-level random effects for the interviewers at the various waves and allow assessment of, for example, the relative importance of previous- and current-wave interviewers for current-wave nonresponse. To illustrate the application, multiple membership models are applied to the UK Family and Children Survey to identify interviewer effects in a longitudinal study. The paper takes a critical view of the substantive interpretation of the model weights and provides practical guidance to statistical modelers. The main recommendation is that the weights in a multiple membership model are best specified by exploring different weight specifications and comparing them with the DIC, rather than by prespecifying the weights.
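For reference, the model class described above can be written out explicitly. The following is a standard formulation of a two-level multiple membership logistic model (the notation is not taken from this paper but follows the usual multilevel-modelling convention, e.g. Goldstein, 2011a). For case i with binary outcome y_i (e.g. response at the current wave) and set of associated interviewers int(i),

$$ y_i \sim \mathrm{Bernoulli}(\pi_i), \qquad \operatorname{logit}(\pi_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sum_{j \in \mathrm{int}(i)} w_{ij}\, u_j, \qquad u_j \sim N(0, \sigma_u^2), $$

where the interviewer effects u_j are shared across all cases an interviewer worked on, and the weights w_{ij} are non-negative and sum to one within each case, encoding the hypothesized contribution of each interviewer to that case. Choosing these weights (for example, equal weights across waves versus weight concentrated on the current-wave interviewer) is the specification problem that the DIC is used to address.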

Highlights

  • In interviewer-administered surveys, interviewers can substantially influence the behavior of respondents, including their response to the survey participation request; this holds in both cross-sectional and longitudinal surveys (see the references in the Introduction)

  • This paper investigates the properties of the variance estimator and the test statistic for multiple membership models when the true multiple membership weights are unknown, as they would be in practice, and how these properties change with the model selection method chosen

  • The models compared include specifications with possible prespecified weights and specifications whose weights are identified as giving the best fit by the Deviance Information Criterion (DIC); a minimal sketch of the DIC calculation is given after this list
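Because the DIC is central to the weight-selection strategy described above, the following minimal Python sketch shows how the DIC can be computed from MCMC output for a fitted logistic multiple membership model. It is illustrative only: the function and argument names are not from the paper.

    import numpy as np

    def bernoulli_deviance(y, p, eps=1e-12):
        """Minus twice the Bernoulli log-likelihood for fitted probabilities p."""
        p = np.clip(p, eps, 1 - eps)
        return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    def dic(y, p_draws, p_at_posterior_mean):
        """DIC = Dbar + pD, with pD = Dbar - D(posterior mean)
        (Spiegelhalter et al., 2002).

        y                   : (n,) array of 0/1 outcomes
        p_draws             : (n_draws, n) fitted probabilities, one row per MCMC draw
        p_at_posterior_mean : (n,) fitted probabilities at the posterior parameter means
        """
        dbar = np.mean([bernoulli_deviance(y, p) for p in p_draws])
        dhat = bernoulli_deviance(y, p_at_posterior_mean)
        p_d = dbar - dhat          # effective number of parameters
        return dbar + p_d          # smaller DIC indicates better-supported weights

In this scheme the multiple membership model is refitted under each candidate weight vector (for example, equal weights across waves versus weight only on the current-wave interviewer), the DIC is computed for each fit, and the specification with the smallest DIC is retained.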


Introduction

In interviewer-administered surveys, interviewers can substantially influence the behavior of respondents, including their response to the survey participation request; this holds both in cross-sectional surveys (Blom et al., 2010; Durrant & Steele, 2009; Durrant et al., 2010; Durrant & D’Arrigo, 2014; West & Blom, 2016) and in longitudinal surveys (Campanelli & O'Muircheartaigh, 1999; Pickery & Loosveldt, 2002; Pickery et al., 2001; Haunberger, 2010; Lynn et al., 2013; Vassallo, Durrant, Smith & Goldstein, 2015; Vassallo, Durrant & Smith, 2017; Brunton-Smith et al., 2016). Interviewers influence respondents by introducing the survey concept, engaging the respondent, addressing any queries, and gaining response (Groves & Couper, 1998; Hox & De Leeuw, 2002). The resulting interviewer variability introduces non-zero correlations (or clustering) in the responses among sample units worked on by the same interviewer. These within-interviewer correlations reduce effective sample sizes, much as cluster sampling does. West and Blom (2016) report that an average interviewer workload of 35 respondents combined with a within-interviewer correlation of only 0.03 would double the estimated variance of a mean, or, equivalently, halve the effective sample size, which underlines the importance of understanding interviewer-level characteristics and other factors that introduce this type of variability into different survey outcomes. A better understanding of such interviewer influences and behaviors is important for helping to reduce nonresponse before or during data collection and for improving response propensity models. If all the distinct interviewers from both the current and previous waves associated with a case influence the current-wave response decision, failing to account for the multiple membership structure will lead to an underestimation of the between-interviewer variance (Goldstein, 2011a), with significant biasing effects on parameter estimates in response propensity models (Chung & Beretvas, 2012).
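The West and Blom (2016) figure follows from the standard design-effect approximation for clustered data; the arithmetic below is a reconstruction of that claim rather than a calculation reproduced from the paper:

$$ \mathrm{deff} = 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}} = 1 + (35 - 1) \times 0.03 = 2.02 \approx 2, $$

where \bar{m} is the average interviewer workload and \rho_{\mathrm{int}} is the within-interviewer correlation, so the variance of an estimated mean is roughly doubled and the effective sample size, n_{\mathrm{eff}} = n / \mathrm{deff}, is roughly halved.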

