Abstract

After generating a large ensemble of palaeo ice sheet model runs, it is common either to rank the simulations or to classify each simulation as an acceptable match to observations or not. Both tasks require implicit human judgement, usually left to the discretion of the research authors. For instance, even numerical comparisons to reconstructions require human input on values for match thresholds and allowances for model–data mismatch. We embrace the subjectivity of human judgement and calibrate an ice-sheet model by explicitly asking ∼100 experts to identify simulations that are good enough. Expert judgement is based on a set of features of the model output that are of interest (for example, margin shapes and regional ice volumes); where possible, we also record the features that informed each judgement. By seeking the input of many experts, we obtain a community consensus that can be used to develop guidance for determining the quality of future simulations. This short communication describes our exercise in seeking expert classifications of simulations of the Last Glacial Maximum (LGM) North American Ice Sheets, discusses the lessons learnt, and suggests future analysis of the responses.

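The aggregation of many expert classifications into a community consensus can be illustrated with a minimal sketch. The code below is a hypothetical example, not the study's method: the run identifiers, votes, and the 0.5 acceptance threshold are all illustrative assumptions, and it simply computes the fraction of experts who judged each simulation "good enough".

```python
# Hypothetical sketch: aggregating expert accept/reject judgements into a
# per-simulation consensus. Run names, votes, and the 0.5 threshold are
# illustrative assumptions, not values from the study.

# votes[run_id] is a list of expert judgements:
# True = judged a "good enough" match to observations, False = rejected.
votes = {
    "run_001": [True, True, False, True],
    "run_002": [False, False, True, False],
    "run_003": [True, True, True, True],
}


def consensus(votes, threshold=0.5):
    """Return the acceptance fraction and a consensus label for each simulation."""
    results = {}
    for run_id, judgements in votes.items():
        fraction = sum(judgements) / len(judgements)
        results[run_id] = (fraction, fraction >= threshold)
    return results


for run_id, (fraction, accepted) in sorted(consensus(votes).items()):
    label = "acceptable" if accepted else "rejected"
    print(f"{run_id}: {fraction:.2f} of experts accepted -> {label}")
```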