Abstract

In this introductory paper to the special issue on crop model prediction uncertainty, we present and compare the methodological choices made in the studies included in this issue, and highlight some remaining challenges. As a common framework for all studies, we define prediction uncertainty as the distribution of prediction error, which can be written as the sum of a bias term and a predictor uncertainty term, the latter representing the random variation due to uncertainty in model structure, model parameters or model inputs. Several themes recur in many of the studies: the use of multi-model ensembles (MMEs) to quantify model structural uncertainty; an emphasis on uncertainty in the inputs required for prediction of regional results or for climate change impact assessment; simultaneous consideration of multiple sources of uncertainty; exploration of how uncertainty varies over space and time; and the use of sensitivity analysis techniques to disaggregate the separate contributions to prediction uncertainty. Relatively new approaches include estimating both the bias and predictor uncertainty terms of prediction error, constructing MMEs specifically designed to explore uncertainty in model structure, using emulators for sensitivity analysis, and exploring ways to reduce prediction uncertainty other than through model improvement. Major remaining challenges include standardizing approaches to quantifying uncertainty in model structure, parameters and inputs; going beyond studies of specific sources of uncertainty to estimates of overall prediction uncertainty; comparing and combining validation and uncertainty studies; and evaluating uncertainty estimates. Looking forward, we suggest that assessment of prediction uncertainty should become a standard part of any modelling project. The studies in this issue will contribute toward that goal.
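
In symbols, a minimal sketch of this decomposition (the notation is ours, not taken from the special issue; let Y denote the observed quantity and \hat{Y} the model prediction):

\[
e \;=\; \hat{Y} - Y \;=\; \underbrace{\mathrm{E}[e]}_{\text{bias}} \;+\; \underbrace{\left(e - \mathrm{E}[e]\right)}_{\text{predictor uncertainty}},
\]

where the predictor uncertainty term has mean zero and its distribution collects the random variation arising from uncertainty in model structure, parameters and inputs.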
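As an illustrative sketch only (not a method taken from any study in the issue; the data and factor structure below are entirely synthetic), a first-order, Sobol'-style disaggregation of the variance of an MME's predictions into structure, parameter and input contributions might look like:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ensemble of yield predictions (t/ha), indexed by
    # model structure (M), parameter draw (P) and input scenario (I).
    M, P, I = 5, 20, 10
    pred = (rng.normal(8.0, 0.8, size=(M, 1, 1))      # structural differences
            + rng.normal(0.0, 0.4, size=(M, P, 1))    # parameter uncertainty
            + rng.normal(0.0, 0.6, size=(1, 1, I)))   # input uncertainty

    total_var = pred.var()

    # First-order (main-effect) contributions: the variance of the mean
    # taken over the other two factors, one factor at a time.
    v_structure = pred.mean(axis=(1, 2)).var()
    v_params    = pred.mean(axis=(0, 2)).var()
    v_inputs    = pred.mean(axis=(0, 1)).var()

    for name, v in [("structure", v_structure),
                    ("parameters", v_params),
                    ("inputs", v_inputs)]:
        print(f"{name:>10}: {v / total_var:.0%} of total variance")

Because the three factors are sampled independently here, the main-effect variances approximately partition the total; in a real MME study the factors interact, and higher-order sensitivity indices or emulator-based methods would be needed to close the decomposition.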
