Abstract

In January, major cities on the East Coast of the United States braced for large snowfalls as Winter Storm Juno approached. Fortunately, the actual snowfall was much less than had been confidently predicted in many cities. In hindsight, it seemed clear that the substantial costs of closing businesses and preparing for a blizzard could have been avoided if forecasts had been more accurate. What happened? Why were so many forecasts both confident and wrong? Bob Winkler, who discussed aggregation of expert judgments in the January issue of Risk Analysis, describes how over-weighting the more extreme of the many available model-based forecasts contributed to overestimation of the snowfall and of the confidence placed in the forecast. How best to characterize uncertainties in model-based risk predictions when multiple more-or-less plausible models give very different results is a crucial challenge for the field, with possible applications ranging from dose–response modeling to climate change forecasting, and it is a repeated theme in recent and forthcoming articles in Risk Analysis. In this issue, it is picked up in papers by Smith and Gans, on improving the characterization of uncertainty in estimates of health risks from air pollution when there is significant epistemic uncertainty about the most appropriate model, and by Kim et al., on using posterior averaging over multiple models to estimate benchmark doses when hormesis may or may not be present. LeClerc and Joslyn examine the effects of false positives ("crying wolf") on subsequent weather-related decisions about whether to salt roads in anticipation of freezes. Risk management decision-making under model uncertainty, when roughly equally credible models make very different predictions, will continue to be an important foundational issue in risk analysis, and the editors encourage submissions that address this topic.

How much freedom to act should we exercise, and how much duty to take care and to issue warnings do we owe each other, when our actions might cause harm to others? More generally, how do ethical considerations intersect with risk analysis in situations where both have something to say about how to make or constrain choices whose consequences affect others? Doorn discusses the field of risk ethics, arguing that it should pay more attention to normative principles for making decisions that affect the distribution of consequences from natural hazards, or the distributions of risks, responsibilities, vulnerability, protective investments, or human capabilities, rather than focusing exclusively on technological hazards. Several other papers in this issue provide analytic techniques for quantifying the equity of risk distributions. Chatterjee et al., in a paper discussed further below, consider equity in the distribution of terrorism risk-reduction benefits and costs. Zolfaghari and Peychaleh use operations research methods (two-stage stochastic programming) to include equity as a constraint in optimizing the allocation of funds to mitigate earthquake risks in Tehran through structural retrofitting. This constrained optimization model allows the additional costs needed to achieve more equitable risk distributions to be quantified (or, for a fixed budget, it can be used to quantify the increase in expected lives lost or other harm done in the event of an earthquake that would be required to achieve greater equity in the allocation of risk-mitigation funds). For Tehran, the authors find that costs increase modestly with equity until an "elbow" in the curve is reached, beyond which further increases in equity require large increases in budget.
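As a minimal sketch of how such an equity-constrained allocation can be posed (an illustration only, not the authors' two-stage stochastic program; the districts, baseline risks, effectiveness coefficients, and budget below are hypothetical), one can minimize expected deaths for a fixed retrofit budget while capping the residual risk that any single district may bear, then sweep the cap to trace the cost of equity:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical districts: baseline expected deaths and deaths averted
    # per $M of retrofit spending (all numbers illustrative).
    r = np.array([120.0, 80.0, 60.0, 40.0])   # baseline expected deaths
    eff = np.array([0.8, 1.5, 1.2, 2.0])      # deaths averted per $M spent
    B = 60.0                                  # total retrofit budget, $M

    def min_deaths(E):
        """Minimize total expected deaths subject to the budget and an
        equity cap E on any one district's residual expected deaths."""
        c = -eff                               # maximize total deaths averted
        A_ub = np.vstack([np.ones_like(eff),   # budget: sum(x) <= B
                          -np.diag(eff)])      # equity: r_i - eff_i*x_i <= E
        b_ub = np.concatenate([[B], E - r])
        bounds = [(0.0, ri / ei) for ri, ei in zip(r, eff)]  # cannot avert more than r_i
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return r.sum() + res.fun if res.success else None    # res.fun = -(deaths averted)

    for E in [150.0, 100.0, 90.0, 80.0, 70.0]:  # progressively tighter equity caps
        d = min_deaths(E)
        print(f"cap {E:5.0f}: " + (f"min expected deaths {d:6.1f}" if d is not None else "infeasible"))

Sweeping the cap makes the "elbow" visible: tightening equity is cheap at first, then becomes sharply more expensive (here, infeasible) once the budget binds.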
The expected number of deaths or cases of illness caused each year by exposure of a population to fine particulate matter (PM2.5) can be estimated by multiplying the number of people exposed at each level of PM2.5 by the excess risk of mortality or morbidity per person per year associated with that level of exposure, as estimated from epidemiological models (for example, one million people exposed at a level associated with an excess mortality risk of 1 in 10,000 per person per year contribute an expected 100 deaths per year). But suppose that the models might be wrong, or that there is epistemic uncertainty about whether they are correct, in addition to the usual random (aleatory) uncertainty due to sampling. For example, suppose that the shape of the function relating PM2.5 concentrations to excess risks is uncertain, or that the functions needed to describe the contributions to risk of different constituents of PM2.5 are not known. How should such model uncertainty be included in risk calculations, characterized in the outputs, and summarized or displayed for policy makers? Smith and Gans argue that EPA's Benefit Mapping and Analysis Program (BenMAP) software tool has evolved to allow estimated health benefits from reducing PM2.5 exposures to be calculated with increasing geographic resolution, while doing too little to encourage understanding or characterization of the epistemic uncertainties in such benefits estimates, e.g., due to model uncertainties. They suggest giving users more flexibility (and responsibility) in choosing an appropriate concentration-response function, to encourage better recognition of the wide diversity of estimates in the literature and appreciation of how much this crucial choice matters to the resulting benefits estimates, rather than simply providing a few options from a prepopulated library. Fann et al. of the EPA, representing BenMAP developers and users, welcome the suggestions for improving BenMAP's uncertainty analysis, but note that the defaults provided by EPA are extensively documented and have been vetted by bodies including the EPA Science Advisory Board and the National Academy of Sciences, and that key issues, such as whether associations between exposures and health effects are causal, have been decided using a highly structured approach. Smith and Gans respond that the question of whether the modeling assumptions used are correct is still crucial, and that uncertainties about them and about the resulting benefits estimates (uncertainties that the National Academy of Sciences and the supporting documentation emphasize rather than dismiss) should be more clearly displayed to decision-makers.

EPA's benchmark dose software (BMDS) is often applied under the assumption that dose–response relations are non-decreasing, but there is substantial empirical evidence that U-shaped or J-shaped dose–response functions ("hormesis") occur for many biological systems and endpoints. Kim et al. show how to extend Bayesian model averaging (BMA), a well-developed approach to taking model uncertainty into account, to the problem of estimating benchmark doses when hormesis is considered a realistic possibility. They illustrate model averaging with a linearized multistage dose–response model, for both monotonic and potentially hormetic dose–response relations, using both simulated data and real data on the carcinogenicity of dioxin (TCDD) in rats.
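A minimal sketch of the model-averaging idea (not Kim et al.'s actual procedure; the candidate models, BIC values, and per-model benchmark doses below are hypothetical) weights each fitted dose–response model by an approximate posterior model probability and averages the benchmark doses:

    import numpy as np

    # Hypothetical results from fitting three candidate dose-response models
    # (monotone, hormetic, threshold) to the same bioassay data.
    bic = np.array([412.3, 410.1, 415.8])  # BIC of each fitted model (illustrative)
    bmd = np.array([1.8, 0.9, 2.4])        # per-model benchmark dose, mg/kg-day (illustrative)

    # BIC approximation to posterior model probabilities, assuming equal
    # prior probabilities: w_k is proportional to exp(-BIC_k / 2).
    w = np.exp(-0.5 * (bic - bic.min()))   # subtract the minimum for numerical stability
    w /= w.sum()

    print("posterior model weights:", np.round(w, 3))
    print(f"model-averaged BMD: {w @ bmd:.2f} mg/kg-day")

When a hormetic model carries non-negligible posterior weight, the averaged benchmark dose reflects that possibility instead of being conditioned on a single, possibly wrong, monotone model.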
After a release of radioactivity contaminates various foods, such as milk and spinach, how should levels of radioactivity in the foods be monitored over time, as they gradually decline toward background levels, to best protect consumers? Motivated by this problem in the wake of the Fukushima accident, Seto and Uriu use simulation modeling to compare three different food sampling protocols. They conclude that potential risks from consuming food with greater-than-background radiation levels are minimized by a protocol that allocates sampling effort across foods (spinach and milk, in the example considered) based on estimated radioisotope concentrations calculated from the empirical apparent decay rates revealed by weekly monitoring data.
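A minimal sketch of the decay-rate idea (illustrative only; the weekly measurements and the proportional-allocation rule below are hypothetical simplifications of the protocols Seto and Uriu compare) estimates each food's apparent decay rate by regressing log concentration on time, then allocates the next week's sampling effort in proportion to the projected concentrations:

    import numpy as np

    # Hypothetical weekly monitoring data: radioisotope concentration in Bq/kg.
    weeks = np.arange(5)  # weeks 0..4
    conc = {"spinach": np.array([950.0, 610.0, 400.0, 255.0, 170.0]),
            "milk": np.array([300.0, 240.0, 195.0, 160.0, 130.0])}

    budget = 100  # total samples available next week
    projected = {}
    for food, c in conc.items():
        # Fit log(c) = intercept + slope * t; the (negative) slope is the
        # empirical apparent decay rate.
        slope, intercept = np.polyfit(weeks, np.log(c), 1)
        projected[food] = np.exp(intercept + slope * (weeks[-1] + 1))  # next week's level

    total = sum(projected.values())
    for food, p in projected.items():
        n = round(budget * p / total)  # allocate sampling effort proportionally
        print(f"{food:8s} projected {p:7.1f} Bq/kg -> {n} samples")

Foods whose measured contamination declines more slowly keep drawing sampling effort, which is the sense in which the preferred protocol tracks estimated concentrations rather than fixed quotas.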
Several articles in this issue illuminate aspects of risk perception and risk communication. Guignet and Alberini ask homeowners in the United Kingdom and Italy to make tradeoffs between hypothetical home prices and mortality risks from air pollution, and observe that the resulting estimated value of a statistical life is about three times higher in Italy than in the United Kingdom, and that Italian, but not U.K., respondents also place an especially high value on reductions in cancer mortality risks. Wernstedt and Murray-Tuite examine how risk perceptions changed over time following a fatal collision in 2009 between two trains in Washington DC's Metrorail system. In addition to confirming findings that might be expected from the literature, such as greater aversion to transportation fatality risk among women than men, among people with less experience using the system, and among those with lower household incomes, the authors also report the less expected finding that aversion to fatalities appeared to increase over the months following the accident. Johnson et al. examine how perceived blame for a hypothetical case of Salmonella contamination of food is allocated among the different parties involved in food safety, from farmers to sellers to preparers to government agencies. Unsurprisingly, trust in a party reduces the blame assigned to it; more intriguingly, perceptions that the party was aware of the contamination and free to act to reduce it increase blame. The effects are weak in this study, and further modeling of how blame is attributed to institutions is likely to be worthwhile. (The key lesson from food safety experts, that adequate cooking and kitchen hygiene are crucial to microbial safety, did not appear to exert much influence, as preparers were not singled out for most of the blame.)

LeClerc and Joslyn use an experiment to examine how decisions about whether to salt roads in case of freezing vary with the false positive rate of a decision aid, when participants use weather forecasts and the aid's advice to decide what to do. They find that only very high or very low false alarm rates lead to significantly inferior decision-making, but that adding probabilistic uncertainty estimates to forecasts increases both compliance with recommendations and the quality of decisions. How strongly do perceptions of risks affect behaviors such as preparing for floods? de Boer et al. provide evidence from an experiment in the Netherlands that people are more likely to be responsive to information about flood-related precautions if the framing of the context for the information (taking preventive measures to mitigate flood risks) is first used to induce a prevention focus; the perceived vulnerability of the recipient and the perceived efficacy of the precautions then affect perceived relevance and help to predict responsiveness to the information provided. The authors relate motivation to be well prepared for floods to other aspects of worldview, such as trust in authorities to manage risks competently, beliefs about the local impacts of climate change, and chronic prevention and promotion orientations, and they identify clusters of people with different sensitivities to precaution communications, based in part on differences in such factors.

"Defense in depth" is a well-known principle for designing defenses against deliberate attacks: if one layer of defense fails, another is ready to thwart the attack. But how should such a layered system be designed if countermeasures within the same layer and across layers interact? If some potential countermeasures are complementary, or partly redundant, then selecting an affordable subset to implement may require evaluating and comparing many combinations. Chatterjee et al. apply portfolio decision analysis to this challenge, noting its similarity to challenges in other areas (such as pharmaceutical product development, oil and gas, and military investments) in which the value of an investment in one costly activity depends on which others are successfully completed. They develop an investment optimization model that takes as inputs judgments about the relative probabilities of different attack scenarios and the relative effectiveness of different security measures, and delivers as output a set of potential investments that are not clearly dominated by other possible choices. In addition, the outputs allow inequities in the distributions of costs and benefits of risk-reducing investments, and spill-over effects of investments on local economies and communities, to be considered. By applying operations research (including portfolio optimization), structured judgment elicitation, "system of systems" risk analysis, and decision analysis concepts to the difficult problem of designing layered defenses, this article advances the state of the art of practical methods for taking interactions among defensive measures into account when deciding how best to allocate limited defensive resources to obtain the greatest achievable risk reduction.
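A minimal sketch of the portfolio idea (the countermeasures, costs, effects, and pairwise interaction terms below are hypothetical, and this is not the authors' model) enumerates subsets of countermeasures, scores each portfolio's cost and risk reduction with an interaction adjustment for redundant or complementary layers, and keeps only the portfolios not dominated by any cheaper, at-least-as-effective alternative:

    from itertools import combinations

    # Hypothetical countermeasures: (name, cost in $M, standalone risk reduction).
    measures = [("fences", 2.0, 0.20), ("sensors", 3.0, 0.30),
                ("guards", 4.0, 0.35), ("screening", 5.0, 0.40)]
    # Pairwise interactions: negative = partly redundant, positive = complementary.
    interaction = {("fences", "sensors"): -0.10, ("sensors", "guards"): 0.05}

    def score(portfolio):
        """Return (cost, risk reduction) for a set of countermeasures."""
        cost = sum(c for _, c, _ in portfolio)
        benefit = sum(b for _, _, b in portfolio)
        for (m1, _, _), (m2, _, _) in combinations(portfolio, 2):
            benefit += interaction.get((m1, m2), 0.0) + interaction.get((m2, m1), 0.0)
        return cost, min(benefit, 1.0)  # risk reduction capped at 100%

    # Enumerate all non-empty portfolios and keep the non-dominated ones.
    portfolios = [p for k in range(1, len(measures) + 1)
                  for p in combinations(measures, k)]
    scored = [(score(p), [name for name, _, _ in p]) for p in portfolios]
    frontier = [(cb, names) for cb, names in scored
                if not any(c2 <= cb[0] and b2 >= cb[1] and (c2, b2) != cb
                           for (c2, b2), _ in scored)]
    for (c, b), names in sorted(frontier):
        print(f"cost {c:4.1f}  risk reduction {b:.2f}  {names}")

With interactions present, the best portfolio at a given budget need not contain the best standalone measures, which is why subset enumeration (or, at realistic scale, portfolio optimization) is needed.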
Taleb's metaphor of black swan events has inspired much reflection by risk analysts and others on the importance of heavy-tailed distributions and of events that cannot be predicted well by extrapolating from past observations. His new book on "antifragility" points out that it is often much easier to determine whether a system is fragile (i.e., easily broken, likely to be harmed by random shocks) than to predict when or whether an event that would break it might occur. This raises the question of how, and under what conditions, it is possible to design systems that are "antifragile," not only in the sense of being less easily broken, but in the more interesting sense of being more likely to benefit from random shocks than to be harmed by them. This can happen when risk managers learn from the system's responses to shocks how to improve its future performance. Aven reviews key concepts of antifragility, relates them to the more familiar ideas of vulnerability and resilience in risk analysis, and notes that the concept adds to the risk analyst's toolkit by emphasizing the importance of uncertainty and learning in improving the risk management of systems over time.

Freiria et al. consider the applied problem of improving the resilience of road networks that may be disrupted by large-scale events such as forest fires, floods, or mass movements. They propose an approach to setting risk management priorities for roads that connect people to health services, taking into account the fact that different attributes of the network matter at the different geographic scales (local and regional) of the road network and of disruptive events. Using a case study of roads in Portugal, their model shows how local drive times increase in the event of each of these natural disasters. Both planning for and responses to such events can be improved by taking the local and regional effects of disasters on the road network into account.

This issue ends with a book review by past Editor-in-Chief Michael Greenberg of current Editor-in-Chief Tony Cox's recent book Improving Risk Analysis. Michael finds the book valuable for its proposed principles for sound risk analysis and for its critique of common errors in risk analysis practice, especially over-reliance on expert judgment and under-reliance on careful causal analysis and modeling when there is considerable uncertainty about true causal relationships. He also mentions several useful chapters on modeling chronic obstructive pulmonary disease (COPD), improving terrorism risk analysis, and making better risk management decisions under uncertainty, as well as other applications, many of which have appeared in Risk Analysis articles over the past decade. He concludes that the book is "not for a novice," but is valuable on several counts and likely to be stimulating for practitioners, whether or not they agree with its policy-related conclusions.
