Abstract

Safety-critical systems cannot afford to wait for data from multiple high-consequence events to become available before informing safety recommendations. Counterfactual reasoning has been widely used in system safety to address this issue, enabling analysts to combine evidence from a single event with their current knowledge of a system in order to learn from past events. However, current counterfactual methods have been criticized for making analysts prone to linearizing and oversimplifying complex events. To overcome these limitations, this work establishes a novel probabilistic approach to counterfactual reasoning called “possible worlds” counterfactuals. This methodology integrates an analyst’s causal knowledge about a system (in the form of a Bayesian network-based risk assessment model) with the best available evidence about an event of interest (e.g., an accident). As a result, counterfactual hypotheses, commonly used in the practice of system safety, can now be rigorously assessed through causally sound probabilistic methods. We demonstrate the capabilities of “possible worlds” counterfactuals with a real-world case study on the 2018 Sun Prairie gas explosion and show how this approach can provide additional lessons and insights beyond those offered by authorities at the time of the event.
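To make the idea concrete, the following is a minimal sketch (not taken from the paper) of the standard abduction-action-prediction procedure that underlies probabilistic counterfactual queries on a causal model. The toy structural model, variable names (leak, ignition, "other cause"), and probabilities are all illustrative assumptions, loosely inspired by a gas-explosion scenario: an explosion occurs if a leak meets an ignition source, or through some other rare cause. The query asks how likely the observed explosion would still have been had the leak been prevented.

```python
from itertools import product

def counterfactual_no_leak(p_leak=0.3, p_ignition=0.5, p_other=0.05):
    """P(explosion would still have occurred | explosion observed, do(leak=0)).

    Toy structural causal model (all parameters are made up for illustration):
      leak     = U_L  ~ Bernoulli(p_leak)
      ignition = U_I  ~ Bernoulli(p_ignition)
      explosion = (leak AND ignition) OR U_O,  U_O ~ Bernoulli(p_other)

    Abduction: condition the exogenous variables on the observed explosion.
    Action: intervene do(leak = 0). Prediction: recompute the outcome.
    """
    num = 0.0  # posterior mass where the counterfactual explosion still occurs
    den = 0.0  # posterior mass consistent with the observed explosion
    for u_l, u_i, u_o in product([0, 1], repeat=3):
        # Prior weight of this exogenous configuration
        w = ((p_leak if u_l else 1 - p_leak)
             * (p_ignition if u_i else 1 - p_ignition)
             * (p_other if u_o else 1 - p_other))
        observed_explosion = (u_l and u_i) or u_o
        if observed_explosion:
            den += w
            # Counterfactual world: leak forced to 0, same exogenous noise
            cf_explosion = (0 and u_i) or u_o
            num += w * (1 if cf_explosion else 0)
    return num / den
```

Under these made-up numbers the query evaluates to roughly 0.26: even with the leak prevented, the posterior weight on the "other cause" pathway keeps some residual explosion probability. The paper's approach applies the same logic at the scale of a full Bayesian network-based risk model rather than a three-variable toy.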
