The literature on motor vehicle safety is vast. Consequently, the review effort reported in this supplement was Herculean in scope and difficulty. It introduced me to many solid and important effectiveness studies. At the same time, it occasionally omitted effectiveness studies that I cite. Returning to my sources heightened my appreciation of the trials faced by the Task Force on Community Preventive Services (the Task Force).

The first trial was finding the studies. Two examples are informative. A National Center for Health Statistics publication1 finds that 92% of low-income parents who own child safety seats use them routinely. That report, however, covers a wide range of parental safety practices. It lacks keywords and is not indexed. Similarly, an article in an economics journal2 uses confidential 1983 National Personal Transportation Survey microdata to analyze how people decide whether to use motor vehicle safety equipment. The paper includes a logit regression explaining child seat use. One explanatory variable is residence in a state with a child safety seat use law (in force in 1983 in 15 states housing 38.5% of the 934 respondents with children under age 5). The model focuses on the influence of individual factors such as parent age, income, and education on seat use decisions, but in the process it produces the best extant evaluation of the impact those laws had on seat use. It finds that the laws increased child seat use by 42.3%, with 17.7% of children diverted from seatbelts and 24.6% restrained for the first time. These findings, however, are by-products. They do not appear in the abstract and merit only one sentence in the text. To the author, a restraint law was just another regression coefficient. How could a systematic search find these studies?

The Task Force's second trial was evaluations measuring different outcomes of comparable interventions. Despite many sound evaluations, the number using any single measure sometimes was dangerously small.
Seemingly anomalous meta-analytic effectiveness estimates sometimes resulted. Most notably, when most child seat laws passed, child seat effectiveness was about 54% against fatalities and 52.5% against nonfatal injuries.3 Child seat laws therefore should decrease deaths and injuries proportionally, by roughly half the amount that use increases. Yet, in the studies reviewed in Table 2 of Zaza et al.,4 laws decrease deaths and injuries combined by 17.3% but decrease deaths alone by 35% and increase use by just 13% (24% if we add the Blomquist study2). By dividing the percentage reductions in deaths and injuries by seat effectiveness, we can convert the estimates to compatible units. Doing so reveals that one study, which reported 57.3% effectiveness against fatalities, must have been analyzing effectiveness among seat users; a law cannot reduce deaths by more than seat effectiveness permits unless it brings seat use to 100%. Across the remaining studies, including Blomquist, it appears that use increased by 35% at the median, reducing deaths and injuries by 18%.

The third trial, which the Task Force handled extremely well, was co-mingled interventions. States do not legislate for the convenience of evaluators. Especially when attacking impaired driving, they often simultaneously implement a package of interventions. The Task Force had to reject some otherwise sound evaluations because they either could not separate the effects of packaged changes or attributed improved outcomes to a subset of the actual package. It is unclear whether we even should try to separate the impacts of package components. Synergy may heighten their yield.
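The unit conversion described above is simple arithmetic, and a brief sketch may make it concrete. The effectiveness and reduction figures are those cited in the text; the function name is illustrative, not from any cited source, and the calculation assumes the observed casualty reduction comes entirely from newly restrained children.

```python
# Dividing an observed percentage reduction in deaths or injuries by
# child seat effectiveness yields the implied percentage increase in
# seat use, putting studies that measured different outcomes on a
# common scale.

FATAL_EFFECTIVENESS = 0.54       # ~54% effective against fatalities
NONFATAL_EFFECTIVENESS = 0.525   # ~52.5% effective against nonfatal injuries

def implied_use_increase(reduction_pct: float, effectiveness: float) -> float:
    """Implied rise in seat use (percent) if the observed casualty
    reduction came entirely from newly restrained children."""
    return reduction_pct / effectiveness

# A 17.3% drop in combined deaths and injuries implies roughly a 33% rise in use.
print(round(implied_use_increase(17.3, NONFATAL_EFFECTIVENESS), 1))  # 33.0

# A reported 57.3% fatality reduction would imply a >100% rise in use,
# which is why that study must have measured effectiveness among seat users.
print(round(implied_use_increase(57.3, FATAL_EFFECTIVENESS), 1))  # 106.1
```

The check in the last two lines mirrors the reasoning in the text: because 57.3% exceeds the 54% effectiveness of the seats themselves, no achievable increase in use could produce that reduction population-wide.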