Abstract
In the presence of model risk, it is well-established practice to replace classical expected values with worst-case expectations taken over all models within a fixed radius of a given reference model; this is the robustness approach. For the class of F-divergences, we provide a careful assessment of how the interplay between the reference model and the divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models that are moderately heavy-tailed, such as lognormal models: worst cases are either infinitely pessimistic, or they rule out fat-tailed power-law models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence that is appropriate regardless of the reference model. The reference model should therefore not be neglected when settling on any particular divergence measure in the robustness approach.
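To illustrate the "infinitely pessimistic" case numerically: under a lognormal reference P, the exponential moment E_P[e^{λX}] is infinite for every λ > 0, and by the standard dual representation of worst-case expectations over relative-entropy balls, this forces the worst-case mean to be +∞ at every radius. A minimal sketch (assuming a standard lognormal with μ = 0, σ = 1, and an arbitrary illustrative λ = 0.1) checks this by evaluating the log of the integrand e^{λx} p(x), which grows without bound because the linear term λx eventually dominates the −(log x)²/(2σ²) decay of the lognormal density:

```python
import math

def lognormal_logpdf(x, sigma=1.0):
    # Log-density of the standard lognormal (mu = 0).
    return -math.log(x * sigma * math.sqrt(2.0 * math.pi)) \
        - (math.log(x) ** 2) / (2.0 * sigma ** 2)

def log_integrand(x, lam=0.1):
    # log of e^{lam * x} * p(x), the integrand of E_P[e^{lam X}].
    return lam * x + lognormal_logpdf(x)

# Evaluate along x = 10^k: the linear term lam * x dominates the
# -(log x)^2 / 2 decay, so the integrand diverges and
# E_P[e^{lam X}] = +infinity for every lam > 0.
values = [log_integrand(10.0 ** k) for k in range(2, 7)]
print(values)  # strictly increasing, eventually astronomically large
```

Since no exponential moment is finite, the dual formula sup over the entropy ball of E_Q[X] equals inf over λ > 0 of (r + log E_P[e^{λX}])/λ, which is +∞ for every radius r > 0; this is the sense in which entropy balls around lognormal references are infinitely pessimistic.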