Abstract

Risk management requires a balance between knowledge and values. Knowledge consists of justified beliefs and evidence, where evidence includes data, assumptions, and models. While the quality and integrity of evidence are valued across the sciences, risk science involves uncertainty, meaning that evidence may be incomplete or imperfect. The use of inappropriate evidence can invalidate risk studies and contribute to misinformation and poor risk management decisions. Moreover, the interpretation of quality and integrity of evidence may vary with the mission of the risk study, the values of decision-makers, and the needs of stakeholders. Although risk science has developed standards for risk studies, there remains a lack of clarity about how to demonstrate the quality and integrity of evidence, recognizing that evidence can take many forms (e.g., data, ideas, and theories), be leveraged at various stages of a risk study (e.g., hypotheses, analyses, and communication), and involve differing expectations across stakeholders. This study develops and presents a classification system for evaluating the quality and integrity of evidence, grounded in current risk science guidance, best practices from non-risk disciplines, and lessons learned from recent risk events. The classification system is demonstrated on a cybersecurity application. This study will be of interest to risk researchers, risk professionals, and data analysts.