Abstract

Debates over uncertainty communication have long pervaded the intelligence discourse and have become acute following major intelligence failures, notably the 9/11 attacks and the Iraq WMD fiasco. To mitigate subjectivity and the potential for miscommunication, some intelligence organizations have developed standardized lexicons for communicating estimative probability. However, as with standards developed for other facets of uncertainty communication in intelligence (e.g., information credibility, source reliability, analytic confidence), these standards are rarely grounded in empirical research, and they may in fact undermine rather than improve communication fidelity. One overriding reason for our pessimistic outlook is that, for the most part, current standards amount to various versions of vague verbiage: all current intelligence standards for communicating estimative probabilities commit to using verbal probabilities (words such as “likely” or phrases such as “realistic possibility”) and shun the use of numerical probabilities, whether as precise estimates (e.g., a 73% chance) or imprecise estimates (e.g., a 60%-80% chance). In this chapter, we present an annotated collection of estimative probability standards gathered by members and affiliates of NATO’s SAS-114 Research Task Group on Assessment and Communication of Uncertainty in Intelligence to Support Decision-Making. These include standards used in intelligence production, as well as in other domains such as defence and security risk management and climate science. After reviewing this non-exhaustive collection of standards, we discuss their common problematic features and how these might compromise efforts to support decision making.
