Abstract

<p>Science is rooted in the concept that a model can be tested against independent observations and rejected when necessary (model validation). However, the problem of model validation becomes formidable when we consider probabilistic models that forecast the evolution of natural "open" systems, which are characterized by the ubiquitous presence of uncertainties of different kinds and the impossibility of replicating the process in the laboratory.</p> <p>The purpose of this talk is to clarify the conceptual issues associated with different types of uncertainty and probability in natural hazard analysis, as well as the conditions that make a hazard model testable and thus 'scientific'. We discuss the limits of the classical frequentist and subjective probabilistic frameworks for validating hazard forecasting models, and show how these difficulties paved the way for some of the strongest criticisms of hazard analysis: arguments that most natural hazard analyses cannot be validated and are intrinsically 'unscientific', and that <em>the outcome of natural processes in general cannot be accurately predicted by mathematical models (cit)</em>.</p> <p>We show that the proper validation of a forecasting model requires a suitable taxonomy of uncertainty embedded in a unified probabilistic framework. This taxonomy comprises three types of uncertainty: (i) the natural variability of the system, usually represented as stochastic processes with parameterized distributions (aleatory variability); (ii) the uncertainty in our knowledge of how the system operates and evolves, often represented as subjective probabilities based on expert opinion (epistemic uncertainty); and (iii) the possibility that our forecasts are wrong owing to system processes about which we are completely ignorant and, hence, cannot quantify in terms of probabilities (ontological error).
We compare this taxonomy with other conventions for describing uncertainties (e.g., the “likelihood” and “confidence” scales used by the Intergovernmental Panel on Climate Change), discuss their link with probability, and consider their estimation using data, models, and subjective expert opinion. We show that these different uncertainties, and the testability of hazard models, can be unequivocally defined only for a well-defined <em>experimental concept</em>, external to the model under test, that identifies collections of data, observed and not yet observed, judged to be stochastically exchangeable when conditioned on a set of explanatory variables.</p> <p>These theoretical issues are applicable to a wide range of natural hazards; here, for the sake of example, they will be discussed using real examples from volcanic, tsunami, and seismic hazard analysis.</p>
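The separation between aleatory variability and epistemic uncertainty, and the idea of testing a forecast against an external experimental concept, can be sketched with a nested Monte Carlo simulation. This is an illustrative toy only, not a method from the talk: the event rates, weights, and observed count below are hypothetical, and the Poisson model is just one convenient choice of parameterized distribution for natural variability.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical setup: forecast the count of events (e.g., earthquakes above
# some magnitude) in a region over the next decade.

# Epistemic uncertainty: the true annual rate is unknown, so it is represented
# by alternative models with subjective (expert-opinion) weights.
candidate_rates = [0.8, 1.0, 1.3]      # events/year under alternative models
model_weights = [0.25, 0.50, 0.25]     # subjective epistemic weights

def sample_decade_count(rate):
    """Aleatory variability: one Poisson draw of the 10-year event count."""
    lam = rate * 10.0
    u = random.random()
    k, p = 0, math.exp(-lam)
    cum = p
    while u > cum:                      # Poisson sampling by CDF inversion
        k += 1
        p *= lam / k
        cum += p
    return k

# Nested Monte Carlo: outer draw picks an epistemic model, inner draw
# samples the aleatory variability conditional on that model.
forecast = []
for _ in range(10_000):
    rate = random.choices(candidate_rates, weights=model_weights)[0]
    forecast.append(sample_decade_count(rate))

# A minimal "experimental concept": the decade counts are treated as
# exchangeable, so an observed count can be compared with the full
# (epistemic + aleatory) forecast distribution.
observed = 14
p_value = sum(c >= observed for c in forecast) / len(forecast)
print(f"mean forecast count: {statistics.mean(forecast):.1f}")
print(f"P(count >= {observed}) = {p_value:.3f}")
```

If the tail probability were very small, the observation would count as evidence against the model under this experimental concept; ontological error, by contrast, would correspond to processes absent from all candidate models and hence invisible to this test.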
