Abstract

Integrated Assessment Models (IAMs) play an important role in climate policy decision making by combining knowledge from various domains into a single modelling framework. They serve as tools for informing and evaluating policies on the basis of economic, climatic and other interdisciplinary model components. However, IAMs have been criticised for their simplifying assumptions, their reliance on negative emission technologies, and their power to shape discourses around climate policy. Given these controversies and the importance of IAMs for international climate policy, model evaluation is essential for analysing how well IAMs perform and what to expect of them. While different proposals for evaluating IAMs exist, they typically target a specific subtype of model and mostly rely on a combination of abstract criteria and concrete evaluation methods. I enrich these perspectives by reviewing approaches from the philosophy of modelling and analysing their applicability to three canonical models covering the wide range of IAM types: DICE, REMIND, and IMAGE. The heterogeneity of IAMs and the political and ethical dimensions of their applications imply that no single evaluation criterion can capture the complexities of IAMs. To include ethical, political and paradigmatic dimensions in the evaluation procedure, I take a closer look at model expectations, which I define as the conjunction of user aims, modelling purposes and evaluation criteria. Through this lens, I find that there is indeed a mismatch between model expectations and model capabilities. While DICE is a useful tool for investigating the effects of different assumptions, it should not be expected to provide quantitative guidance. IMAGE, on the other hand, has proven suitable for projecting environmental impacts, but should not be expected to answer questions that require a description of macroeconomic processes. REMIND can be used to assess different theoretically possible mitigation pathways, but should not be expected to provide accurate forecasts. I argue that this mismatch between what models can do and what is expected of them should be tackled by adjusting expectations to what IAMs can actually deliver, not by trying to make the models live up to outsized expectations. The main vehicle for adjusting expectations is a comprehensive and informative model commentary, that is, an account of the model's appropriate domain of application, its critical modelling choices and assumptions, and the admissible interpretations of its results. However, I find that the analysed IAMs fail to deliver such a commentary. Expectations for IAMs are often not clearly formulated, owing to hard-to-assess user aims, vague purpose statements and opaque ethical dimensions. As clear expectations should form the basis of further evaluations of IAMs, I conclude that integrated assessment modellers should place much more emphasis on their model commentaries, with a special focus on the interpretation of IAM results.
