Dear Editor: In their recent article, Hebert et al. (1) ask their readers to consider the value of self-reported dietary data (SRDD) in informing public health policy while asserting that our challenge to the validity of these data (2) is due to “ignorance” and is “reminiscent of protests by the tobacco industry and its allies” (1). These opinions notwithstanding, we think that in science, value and validity are best determined by data and empirically supported arguments.

Hebert et al. reference our article (2) as they seek “to identify specific issues raised by these authors with respect to putative flaws in dietary assessment….” Although we reiterated the well-established flaws of the NHANES SRDD (e.g., intractable systematic biases, inconsistent trends in misreporting), our main finding was that both over- and underreporting were sufficiently pervasive to conclude that these data are not valid for any inferences regarding energy intake and the etiology of the obesity epidemic. Remarkably, on this point, Hebert et al. are utterly silent. They provide no data to challenge our finding that “[a]cross the 39-y history of the NHANES, [energy intake] data [for] 67.3% of women and 58.7% of men were not physiologically plausible” (2). Indeed, a recent editorial in the British Medical Journal concurred with our results by suggesting that the NHANES data are “incompatible with life” (3).

These reports (2, 3) and others (4, 5) support the conclusion that the ability to generate empirically supported public policy from implausible data is extremely limited. This commonsense position is reinforced by a large body of research demonstrating that nutrition surveys suffer from severe, intractable systematic biases (5, 6) that cannot be overcome with statistical techniques, however sophisticated. For example, energy adjustments were demonstrated to be inadequate to correct for differential recall bias (7).
Importantly, SRDD are based on the naive assumption that human memory and recall provide literal, accurate, and precise reproductions of past ingestive behavior. This assumption is indisputably false (8, 9). In fact, SRDD methods require participants to submit to protocols that mimic procedures known to induce false recall (10). As such, it is impossible to quantify what percentage of the recalled foods and beverages represent completely false reports, are grossly inaccurate, or are somewhat congruent with actual consumption. Given these facts, post hoc statistical machinations are merely number-generating exercises that improve correlations without improving the actual data.

Recently, strong proponents of SRDD protocols provided data that demonstrate the futility of these methods (11). In Freedman et al. (11), the squared average correlation between “true” energy intake and self-reported energy intake ranged from 0.04 to 0.10. The trivial relations between the proxy estimates (i.e., self-reported energy intake) and their referent (i.e., actual energy intake) provide unequivocal evidence that SRDD offer an inadequate basis from which to draw scientific conclusions (6). Importantly, energy intake is the foundation of dietary consumption: all nutrients must be consumed within the quantity of food and beverages needed to meet minimum energy requirements (12). As such, with mixed diets it is an analytic truth that dietary patterns (i.e., macro- and micronutrient consumption) are differentially misreported when total energy intake is misestimated [e.g., protein (13), fiber (14), cholesterol (14), calcium (15), iron (16), zinc (17), and sodium (18)]. Given these results, the assumption that SRDD can be used to examine dietary patterns is not logically valid.

There are errors of fact in the article by Hebert et al. that warrant correction. Their assertion that we incorrectly applied the “Goldberg cutoff” (19) is patently false.
In Table 6, page 577 of the article by Goldberg et al. (19), the suggested energy intake to basal metabolic rate (EI/BMR) cutoff is 1.50 for a single 24-h dietary recall (24HR) when BMR is “predicted from the Schofield equations” with a sample size of ≥300 (19). As we reported (2), the 1.35 cutoff we used was more liberal than the one Goldberg et al. suggested, and given its reduced sensitivity, we captured fewer underreporters. With the suggested cutoff of 1.50, underreporting increased to ∼70% for the entire sample and to ∼76% and ∼83% for obese men and women, respectively. These results raise the question: what is the value of NHANES dietary data when >80% of obese women’s self-reported energy intakes are physiologically implausible?

The second factual error is the statement that additional 24HRs improve estimates. In our analyses, the mean estimates from the second 24HR in every NHANES wave from 2001 to 2010 exhibited a significantly greater level of underreporting than the first. These results are well known and in agreement with the Observing Protein and Energy Nutrition study, which “showed greater underreporting” in the second administration (13).

Given the totality of our empirically supported arguments, we find Hebert et al.’s defense of the status quo an impediment to both scientific progress and empirically supported public nutrition policy.
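For readers unfamiliar with the Goldberg screen discussed above, it can be sketched in a few lines. This is a minimal illustration, not the letter’s analysis code: the Schofield coefficients shown (adults aged 30–60 y) and the example weights and intakes are assumptions for demonstration, while the 1.50 cutoff is the single-24HR value cited from Goldberg et al.’s Table 6.

```python
# Minimal sketch of the Goldberg plausibility screen, assuming the
# Schofield (1985) BMR prediction equations for adults aged 30-60 y (MJ/d):
#   men:   BMR = 0.048 * weight_kg + 3.653
#   women: BMR = 0.034 * weight_kg + 3.538
# Coefficients and example values are illustrative, not the letter's data.

KCAL_PER_MJ = 239.006

def schofield_bmr_kcal(sex: str, weight_kg: float) -> float:
    """Predicted BMR (kcal/d) for adults aged 30-60 y."""
    if sex == "M":
        bmr_mj = 0.048 * weight_kg + 3.653
    else:
        bmr_mj = 0.034 * weight_kg + 3.538
    return bmr_mj * KCAL_PER_MJ

def plausible(reported_ei_kcal: float, bmr_kcal: float,
              cutoff: float = 1.50) -> bool:
    """Goldberg screen: a report with EI/BMR below the cutoff is flagged
    as physiologically implausible (under-reporting)."""
    return reported_ei_kcal / bmr_kcal >= cutoff

bmr = schofield_bmr_kcal("M", 80.0)  # roughly 1790 kcal/d
print(plausible(1800.0, bmr))        # False: EI/BMR is about 1.0
print(plausible(2800.0, bmr))        # True:  EI/BMR is about 1.56
```

Applied record by record, a screen of this form is how the underreporting percentages discussed in the letter are tallied; moving the cutoff from 1.35 to 1.50 simply flags more records as implausible.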
