Abstract
Forecasting plays a vital role in intelligence assessment and contributes to national security decision‐making by improving strategic foresight. Remarkably, most intelligence organizations do not proactively track their forecasting accuracy and, therefore, do not know how accurate their forecasts are or what types of biases intelligence analysts (or organizations) might exhibit. We review research on geopolitical forecasting and a roughly decade‐long program of research to assess the accuracy of strategic intelligence forecasts produced by and for the Government of Canada. This research is described in three phases corresponding to previously published research, after which novel analyses (drawing on the data used in the earlier phases) are reported. The findings reveal a high degree of forecasting accuracy as well as significant underconfidence. These results were evident regardless of whether analysts assigned numeric probabilities to their forecasts. However, the novel analyses clarified that there is a substantial cost to accuracy if end‐users rely on their own interpretations of the verbal probability terms used in the forecasts. We recommend that intelligence organizations proactively track forecasting accuracy as a means of supporting accountability and organizational learning. We further recommend that intelligence organizations use numeric probabilities in their forecasts to support better comprehension of these estimates by end‐users.