Abstract
State-dependent importance sampling algorithms based on mixtures are considered. The algorithms are designed to compute tail probabilities of a heavy-tailed random walk. The increments of the random walk are assumed to have a regularly varying distribution. Sufficient conditions for obtaining bounded relative error are presented for rather general mixture algorithms. Two new examples, called the generalized Pareto mixture and the scaling mixture, are introduced. Both examples have good asymptotic properties and, in contrast to some of the existing algorithms, they are very easy to implement. Their performance is illustrated by numerical experiments. Finally, it is proved that mixture algorithms of this kind can be designed to have vanishing relative error.
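To make the setting concrete, here is a minimal sketch, not of the paper's mixture algorithms, but of the baseline problem they address: estimating the tail probability P(S_n > b) of a random walk with regularly varying (here, Pareto) increments by crude Monte Carlo. The shape parameter, horizon, and thresholds are illustrative assumptions; the point is that the relative error of the naive estimator deteriorates as b grows, which is what bounded- or vanishing-relative-error importance sampling is designed to avoid.

```python
# Sketch of the problem setting only (not the paper's generalized Pareto or
# scaling mixture algorithms): crude Monte Carlo for P(S_n > b) with
# regularly varying increments. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)


def pareto_increments(alpha: float, size) -> np.ndarray:
    """Pareto(alpha) samples on [1, inf); a standard regularly varying example."""
    u = rng.random(size)
    return u ** (-1.0 / alpha)


def crude_mc_tail_prob(n: int, b: float, alpha: float, num_samples: int):
    """Crude Monte Carlo estimate of P(S_n > b) and its estimated relative error."""
    x = pareto_increments(alpha, (num_samples, n))
    hits = (x.sum(axis=1) > b).astype(float)
    p_hat = hits.mean()
    std_err = hits.std(ddof=1) / np.sqrt(num_samples)
    rel_err = std_err / p_hat if p_hat > 0 else np.inf
    return p_hat, rel_err


if __name__ == "__main__":
    # Relative error grows with the threshold b under crude Monte Carlo.
    for b in (10.0, 100.0, 1000.0):
        p, re = crude_mc_tail_prob(n=5, b=b, alpha=1.5, num_samples=100_000)
        print(f"b={b:7.1f}  P_hat={p:.3e}  relative error={re:.2f}")
```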