Abstract

Acknowledging the conditionality of model-based evidence facilitates the dialogue between model developers and model users, especially when models are used to guide decisions at the science-policy interface. Model users generally have limited means to verify the realism of a model, being exposed only to its plausibility and trustworthiness, whereas modellers have an array of validation and verification techniques at their disposal. Ultimately, model credibility is what both developers and users aim for, not least to guard against the pitfall of over-interpreting model results. To this end, in this contribution we discuss sensitivity auditing, an extension of sensitivity analysis that can help model developers and users overcome communication barriers and foster dialogue around modelling activities. Sensitivity auditing is not limited to models in a narrow sense: it can be applied to any policy-relevant instance of quantification, including metrics, rankings and indicators. We present six real-world applications of sensitivity auditing to instances of quantification in a range of socio-environmental systems, including public health, education, and the water-food nexus. These examples demonstrate the usefulness of sensitivity auditing in facilitating the proper use of numbers and models at the science-policy-society interface and in avoiding uncertainty laundering.
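
The abstract does not prescribe any particular sensitivity-analysis method. Purely as an illustrative sketch of the quantitative layer that sensitivity auditing extends, the following Python snippet (hypothetical, not from the paper) estimates first-order and total-order variance-based (Sobol) indices with standard Monte Carlo estimators, applied to the Ishigami function, a common benchmark in the sensitivity-analysis literature.

import numpy as np

def sobol_indices(model, bounds, n=4096, seed=0):
    # Illustrative sketch: first-order (S1) and total-order (ST) Sobol
    # indices via the Saltelli (2010) and Jansen (1999) estimators.
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds).T
    # Two independent sample matrices scaled to the input bounds
    A = lo + (hi - lo) * rng.random((n, k))
    B = lo + (hi - lo) * rng.random((n, k))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S1, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i taken from B
        yABi = model(ABi)
        S1[i] = np.mean(yB * (yABi - yA)) / var        # Saltelli 2010
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var  # Jansen 1999
    return S1, ST

def ishigami(X):
    # Standard benchmark function for sensitivity analysis
    return (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
            + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

S1, ST = sobol_indices(ishigami, bounds=[(-np.pi, np.pi)] * 3)
print("First-order:", S1.round(2), "Total-order:", ST.round(2))

Sensitivity auditing, as the paper argues, goes beyond such numerical exercises: it also asks who framed the model, which assumptions are buried in the quantification, and whether the uncertainty has been honestly communicated.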
