In their paper, Christensen et al. 1 used data from a large Danish survey to examine the impact of non-response on estimated rates of morbidity and mortality. The key findings of this study were that non-respondents were at increased risk of all forms of morbidity/mortality when compared with respondents. Furthermore, there was evidence of heterogeneity in morbidity/mortality rates depending on the type of non-response, with those who were non-contactable (n = 1539) having hazards of morbidity/mortality 2.51–7.70 times higher than respondents. At first sight these figures seem alarming, suggesting substantial bias in the sample and serious mis-estimation of morbidity/mortality rates. These conclusions are, however, somewhat misleading, and the findings are shaped by two features of the study design that affect the interpretation of analyses using hazard ratios and tests of significance. The first feature is that the overall sample (n = 39 540) is large. With a sample of this size, even small between-group differences will be found to be statistically significant. The second feature is that the base rates for morbidity/mortality are low, ranging from 0.10% (drug-related mortality) to 12.75% (all-cause mortality). Low base rates imply that relatively small differences in the number of deaths between the respondent and non-respondent populations may generate large hazard ratios. These features are illustrated in Table 1, which presents rates of morbidity/mortality for the respondents, each of the non-respondent groups and the total population. These comparisons show that, in terms of rates of morbidity/mortality, there were only small differences between respondents and non-respondents.
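The base-rate point can be made concrete with a small sketch. The death counts below are hypothetical (only the group sizes and the 0.10% base rate come from the paper): when the base rate is very low, an absolute excess of a fraction of one percentage point is enough to produce a severalfold rate ratio.

```python
def rate_ratio(events_a, n_a, events_b, n_b):
    """Crude event-rate ratio of group A relative to group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical counts for a low-base-rate outcome (~0.10%, as for
# drug-related mortality in the study): 38 deaths among 38 000
# respondents versus 12 deaths among 1539 non-contactable persons.
rr = rate_ratio(12, 1539, 38, 38000)
abs_diff = 12 / 1539 - 38 / 38000

print(f"rate ratio = {rr:.2f}")               # prints 7.80
print(f"absolute difference = {abs_diff:.4%}")  # under 0.7 percentage points
```

The ratio looks dramatic, yet the two groups differ by only a handful of deaths; this is the sense in which large hazard ratios can coexist with small absolute differences.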
Although it is clear from Table 1 that there were generally small differences between groups in terms of morbidity/mortality, it is also the case that, for some applications, biases caused by using the respondent sample to calculate rates of morbidity/mortality may be of substantive importance. For example, the overall rate of mortality for the total sample was 12 746.58 deaths per 100 000, whereas the estimated rate from the respondent data was 11 249.64 per 100 000, a mortality rate that is 11.7% lower. This suggests that for some applications, such as investigations of mortality in large samples, the use of respondent-only data may lead to conceptually important differences in estimates. One interesting aspect of the analysis of the sources of bias in the present paper is the suggestion that much of the non-response bias can be accounted for by those who were non-contactable. These findings suggest that, in terms of reducing sample selection biases, a major priority should be placed on minimizing rates of non-contact. In assigning this priority, however, there are a number of issues that need to be considered. Perhaps the most important is the trade-off between overall sample size and risks of non-contact. As a general rule in most survey settings, as sample size increases the resources available to trace difficult-to-contact respondents decrease. For example, in the present study rates of non-contact might have been reduced by halving the sample size and spending the funding saved on improved methods of respondent tracing. The literature in this area suggests that there are a number of strategies that can be used to reduce non-contact rates in surveys. These include the use of personally addressed hand-signed letters 2, repeat contact via telephone and mailing 3, monetary incentives 4 or face-to-face contact 5.
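The size of the respondent-only bias quoted above follows directly from the two published rates; the short check below reproduces the 11.7% figure.

```python
total_rate = 12746.58       # deaths per 100 000, total sample
respondent_rate = 11249.64  # deaths per 100 000, respondents only

# Relative understatement of mortality when only respondents are used
bias = (total_rate - respondent_rate) / total_rate
print(f"respondent-only estimate understates mortality by {bias:.1%}")
# prints: respondent-only estimate understates mortality by 11.7%
```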
A final issue that needs consideration is that the paper by Christensen et al. examines the impacts of non-response on estimates of population rates. While estimation of population rates is of general interest, the focus of many longitudinal studies is upon the associations between risk factors and outcomes. There is considerable reason to believe that estimates of such associations are less likely to be influenced by sample attrition than estimates of population rates. As shown by Gustavson et al., for sample attrition to adversely affect estimates of associations, attrition must be strongly related to both the risk factor and the outcome measures 6. In summary, the paper by Christensen et al. 1 adds to the growing literature on sample attrition in longitudinal research. Consistent with most previous research, the study found that the impacts of sample attrition on estimates of population statistics are detectable but relatively modest. These results suggest that findings from longitudinal studies subject to attrition should be treated as providing ‘ballpark’ estimates of population parameters, and that care should be taken in drawing inferences based on these estimates.

Declaration of interests: None.
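The point about associations being more robust than rates can be illustrated with a simulation. The parameters below (a 30% exposure prevalence, a threefold outcome risk among the exposed, and attrition that depends on exposure only) are invented for illustration, not taken from the study; the sketch shows that such selection biases the estimated outcome rate while leaving the risk-factor/outcome association roughly intact.

```python
import random

random.seed(1)
N = 200_000
full, retained = [], []
for _ in range(N):
    exposed = random.random() < 0.30
    # Outcome risk: 2% baseline, 6% among the exposed (true risk ratio = 3)
    outcome = random.random() < (0.06 if exposed else 0.02)
    full.append((exposed, outcome))
    # Attrition depends on exposure only: exposed half as likely to respond
    if random.random() < (0.40 if exposed else 0.80):
        retained.append((exposed, outcome))

def rate(rows):
    """Overall outcome rate in a sample."""
    return sum(o for _, o in rows) / len(rows)

def risk_ratio(rows):
    """Outcome rate among exposed divided by rate among unexposed."""
    exp = [o for e, o in rows if e]
    unexp = [o for e, o in rows if not e]
    return (sum(exp) / len(exp)) / (sum(unexp) / len(unexp))

print(f"outcome rate: full={rate(full):.4f} respondents={rate(retained):.4f}")
print(f"risk ratio: full={risk_ratio(full):.2f} respondents={risk_ratio(retained):.2f}")
```

In runs of this sketch the respondent-only outcome rate is visibly too low (the high-risk exposed group is under-represented), while both risk ratios stay near the true value of 3, consistent with the Gustavson et al. argument that attrition must be related to both the risk factor and the outcome before associations are badly distorted.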