Abstract

When we trained in pediatrics during the era before synthetic surfactant, there was an old saw among neonatologists that African-American infants, when born remote from term, were more likely to survive than White infants born after pregnancies of comparable duration. It was said that African-American infants seemed less likely to develop neonatal respiratory distress syndrome, and, when they did, it was less severe. Old clinical saws are often wrong, but when the Centers for Disease Control linked the birth and infant death certificates for the 1980 birth cohort, the clinical impression of the neonatologists proved correct. Whether maturity was measured by birth weight or by duration of gestation, small or preterm African-American infants were in fact less likely to die than comparable White infants (1). However, within the "normal range" (≥2,500–3,000 g or ≥37 weeks' gestation), African-American infants were substantially more likely to die than their White counterparts (1). The survival advantage of preterm African-American infants has diminished, but is still present, in the era of synthetic surfactant (2). The "crossing curves" of infant mortality have now been well documented and are generally considered a paradox, a statistical artifact "lacking biological plausibility" (3) to be understood only to the extent that they can be explained away. On these lines, Wilcox and Russell (4, 5) in the 1980s demonstrated that when birth-weight-specific mortality curves for African-American and White infants cross, simple standardization can produce biased results; however, when the birth-weight-specific mortality curves are each standardized to their own internal distribution, the curves become parallel and African-American infants experience higher mortality at each point of relative birth weight. The results were extended to relative gestational age by Hertz-Picciotto and Din-Dzietham (6) in 1998.

In this issue of the Journal, Platt et al. (3), in an extension of previous work by Joseph et al. (7), take a different tack to uncross the curves. They report that a simple change of the denominator from all livebirths at a given gestational age to all fetuses and infants alive at the beginning of that gestational week can eliminate the crossover of gestational-age-specific mortality curves. Their method is also applied to birth-weight-specific mortality, with similar results.

The Wilcox-Russell and the Joseph-Platt et al. approaches both assume that the crossover is not merely a simple artifact of differential errors in reporting or measurement of birth weight or gestational age. More importantly, although totally different, both approaches explicitly assume that uncrossing the curves is important. When that assumption is made, the appropriateness of the underpinnings of each corrective approach is left unquestioned. The Wilcox-Russell approach assumes that the observed birth weight or gestational age distribution (or at least the "predominant" distribution—the underlying Gaussian distribution that comprises the vast majority of births (4)) is biologically "normal" for that population. That assumption raises several questions. Should different populations have different gestational age or birth weight distributions? What defines a "population"? The answers to those questions are fundamental to determining the appropriateness of the Wilcox-Russell approach. The original Joseph-Platt et al. approach assumes that, contrary to common practice, the population "at risk" for fetal and particularly neonatal death is not just liveborn infants but rather all fetuses and infants alive at the beginning of that gestational week; that approach seemingly ignored the concept and process of birth. Thus, for example, it equates an infant born at 28 weeks' gestation who subsequently died at 4 weeks of age with a 32-week stillbirth (7). The model presented by Platt et al. (3) goes a
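The Wilcox-Russell idea of standardizing each population's mortality curve to its own internal birth weight distribution can be made concrete with a minimal sketch. The numbers below are synthetic and purely illustrative, and a simple z-score against each population's raw mean and standard deviation stands in for the fitted "predominant" Gaussian distribution that Wilcox and Russell actually used; the point is only that the same absolute birth weight occupies different relative positions in different populations.

```python
# Minimal sketch (synthetic data): relative birth weight as a z-score
# within each population's own distribution. A raw mean/SD z-score is a
# simplification standing in for the fitted "predominant" distribution
# of the Wilcox-Russell method.
import statistics

def relative_birth_weights(weights_g):
    """Map each birth weight (grams) to a z-score within its own population."""
    mean = statistics.mean(weights_g)
    sd = statistics.stdev(weights_g)
    return [(w - mean) / sd for w in weights_g]

# Two hypothetical populations with different absolute distributions.
pop_a = [2200, 2800, 3100, 3300, 3500, 3700]  # mean 3,100 g
pop_b = [2000, 2600, 2900, 3100, 3300, 3500]  # mean 2,900 g

# The same 2,900 g infant sits at different relative positions:
za = (2900 - statistics.mean(pop_a)) / statistics.stdev(pop_a)
zb = (2900 - statistics.mean(pop_b)) / statistics.stdev(pop_b)
print(f"z in population A: {za:.2f}, z in population B: {zb:.2f}")
```

Comparing mortality at equal z (relative birth weight) rather than equal grams is what renders the curves parallel in the Wilcox-Russell analysis.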
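The denominator change described above can likewise be sketched with synthetic counts. The figures below are invented for illustration only; the sketch simply contrasts the conventional denominator (livebirths delivered in a given gestational week) with the fetuses-at-risk denominator (all fetuses still undelivered at the start of that week) used in the Joseph-Platt et al. approach, here simplified by ignoring stillbirths.

```python
# Illustrative sketch (synthetic counts, not real data): conventional vs.
# fetuses-at-risk gestational-age-specific mortality. Stillbirths are
# ignored for simplicity, so "at risk" means all pregnancies not yet
# delivered before week w.

def mortality_rates(births, deaths):
    """Return {week: (conventional, fetuses_at_risk)}, both per 1,000."""
    weeks = sorted(births)
    rates = {}
    for w in weeks:
        # Conventional: deaths per 1,000 livebirths delivered at week w.
        conventional = 1000 * deaths[w] / births[w]
        # Fetuses-at-risk: deaths per 1,000 fetuses alive at the start of
        # week w, i.e. all deliveries at week w or later.
        at_risk = sum(births[v] for v in weeks if v >= w)
        rates[w] = (conventional, 1000 * deaths[w] / at_risk)
    return rates

# Hypothetical livebirths and neonatal deaths by gestational week.
births = {28: 100, 32: 400, 36: 2000, 40: 20000}
deaths = {28: 30, 32: 40, 36: 60, 40: 100}

for w, (conv, far) in mortality_rates(births, deaths).items():
    print(f"week {w}: conventional {conv:.1f}, fetuses-at-risk {far:.1f}")
```

With these invented counts the conventional rate falls steeply with advancing gestation while the fetuses-at-risk rate rises, which is the reversal of slope that can eliminate a crossover between population-specific curves.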
