While theoretical considerations show that the effectiveness of occupant protection devices declines from 100% at very low crash severity to 0% at very high severity, empirical details have been lacking. When overall in-use effectiveness is estimated by applying traditional methods to data sets that lack a measure of severity, large biases are introduced because non-wearing drivers are riskier drivers, an effect that has been called selective recruitment. These effects are investigated empirically using National Accident Sampling System (NASS) data, in which crash severity is measured by delta-v, the estimated change in the speed of the car as a result of the crash. Supplemental results are obtained from published police-reported data containing a more easily obtained but less objective severity measure. Both data sets provide information on driver fatalities and injuries, allowing four comparisons between effectiveness estimates based only on total casualties and estimates that take into account the different severities of crashes involving belted and unbelted drivers. The data show consistently that the probability that a driver is belted declines as crash severity increases. Belt effectiveness estimates that ignore this effect are biased upwards by large amounts (for example, 60% compared to 40% for injuries in the NASS data). Belts appear more effective at preventing fatalities than at preventing injuries. The results are consistent with a prior estimate, derived using a method unaffected by the biases discussed here, which found that, averaged over all crashes, safety belts reduce driver fatality risk by (42 ± 4)%.
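To make the selective-recruitment bias concrete, the following is a minimal simulation sketch (Python with NumPy; it is not from the paper, and every numerical parameter is hypothetical). It assumes belt use declines with delta-v and that belts cut injury risk by a fixed 40% at any given severity, then compares a pooled effectiveness estimate with one computed within delta-v bins and averaged over the crash-severity distribution.

```python
# Toy illustration (not from the paper) of how selective recruitment inflates
# a pooled belt-effectiveness estimate when unbelted drivers are
# over-represented in severe crashes. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical crash severities (delta-v, km/h).
delta_v = rng.exponential(scale=25.0, size=n)

# Selective recruitment: probability of being belted declines with severity.
p_belted = np.clip(0.7 - 0.006 * delta_v, 0.05, None)
belted = rng.random(n) < p_belted

# Injury risk rises with severity; belts are assumed to cut that risk by a
# fixed "true" 40% at any given severity.
true_effectiveness = 0.40
base_risk = np.clip(0.01 * delta_v, 0.0, 1.0)
risk = np.where(belted, (1 - true_effectiveness) * base_risk, base_risk)
injured = rng.random(n) < risk

# Naive pooled estimate: ignores that belted drivers have milder crashes.
naive = 1 - injured[belted].mean() / injured[~belted].mean()

# Severity-adjusted estimate: compare within delta-v bins, then average
# over the overall crash-severity distribution.
bins = np.arange(0, 120, 10)
idx = np.digitize(delta_v, bins)
adj_terms, weights = [], []
for k in np.unique(idx):
    m = idx == k
    if (m & belted).sum() == 0 or (m & ~belted).sum() == 0:
        continue
    rate_u = injured[m & ~belted].mean()
    if rate_u == 0:
        continue
    adj_terms.append(1 - injured[m & belted].mean() / rate_u)
    weights.append(m.sum())
adjusted = np.average(adj_terms, weights=weights)

print(f"Naive pooled effectiveness:      {naive:.2f}")     # biased upward
print(f"Severity-adjusted effectiveness: {adjusted:.2f}")   # near the assumed 0.40
```

Under these assumed parameters the pooled estimate comes out well above the built-in 40%, while the severity-stratified estimate roughly recovers it, mirroring the direction of the bias reported in the abstract (60% versus 40% for injuries in the NASS data).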