Abstract

Decision makers increasingly rely on forecasts or predictions generated by quantitative models. Best practices recommend that a forecast report be accompanied by a sensitivity analysis. A wide variety of probabilistic sensitivity measures have been suggested; however, model inputs may be ranked differently by different sensitivity measures. Is there some way to reduce this disparity by identifying which probabilistic sensitivity measures are most appropriate for a given reporting context? We address this question by postulating that importance rankings of model inputs generated by a sensitivity measure should correspond to the information value of those inputs in the problem of constructing an optimal report based on some proper scoring rule. While some sensitivity measures have already been identified as information value under proper scoring rules, we identify others and provide some generalizations. We address the general question of when a sensitivity measure has this property, presenting necessary and sufficient conditions. We directly examine whether sensitivity measures retain important properties such as transformation invariance and compliance with Rényi's Postulate D for measures of statistical dependence. These results provide a means for selecting the most appropriate sensitivity measures for a particular reporting context and give the analyst reasonable justification for that selection. We illustrate these ideas using a large-scale probabilistic safety assessment case study used to support decision making in the design and planning of a lunar space mission.
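As a minimal sketch (not from the paper) of one such known correspondence: under the quadratic scoring rule, the information value of observing an input X1 before reporting equals Var(E[Y | X1]), the unnormalized first-order variance-based (Sobol') sensitivity index. The model below is hypothetical, chosen so the conditional expectation is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical additive model: Y = X1 + 2*X2 + noise, all terms independent.
n = 200_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = x1 + 2.0 * x2 + 0.5 * rng.standard_normal(n)

# Under quadratic loss, the optimal unconditional report is E[Y], with
# expected loss Var(Y).
baseline_loss = y.var()

# After observing X1, the optimal report is E[Y | X1], which here is simply
# x1.  The remaining expected loss is E[Var(Y | X1)], so the information
# value of X1 is the drop in expected loss:
#   VoI(X1) = Var(Y) - E[Var(Y | X1)] = Var(E[Y | X1]),
# i.e. the unnormalized first-order Sobol' index of X1 (here equal to 1).
voi_x1 = baseline_loss - ((y - x1) ** 2).mean()

print(voi_x1)  # close to Var(E[Y|X1]) = Var(X1) = 1
```

Ranking inputs by this information value then coincides with ranking them by their variance-based sensitivity indices, which is the kind of correspondence the paper characterizes for general proper scoring rules.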
